[Python-modules-commits] [sqlalchemy] 01/02: Import sqlalchemy_1.0.9+ds1.orig.tar.gz

Piotr Ożarowski piotr at moszumanska.debian.org
Sat Oct 31 22:25:32 UTC 2015


This is an automated email from the git hooks/post-receive script.

piotr pushed a commit to branch master
in repository sqlalchemy.

commit 27ee598e0a73734c0f14e1b49e6140ab3e3bf3b1
Author: Piotr Ożarowski <piotr at debian.org>
Date:   Sat Oct 31 23:11:25 2015 +0100

    Import sqlalchemy_1.0.9+ds1.orig.tar.gz
---
 PKG-INFO                                      |    2 +-
 doc/build/changelog/changelog_09.rst          |   11 +
 doc/build/changelog/changelog_10.rst          |  121 +++
 doc/build/conf.py                             |    4 +-
 doc/build/core/ddl.rst                        |  240 +++--
 doc/build/core/events.rst                     |    4 -
 doc/build/core/tutorial.rst                   |   31 +-
 doc/build/faq/connections.rst                 |   81 ++
 doc/build/faq/sessions.rst                    |   71 ++
 doc/build/orm/events.rst                      |   10 +-
 doc/build/orm/examples.rst                    |    2 +
 doc/build/orm/extensions/associationproxy.rst |    1 +
 doc/build/orm/inheritance.rst                 |    6 +-
 doc/build/orm/relationship_persistence.rst    |  122 ++-
 doc/build/orm/session.rst                     |    1 +
 doc/build/orm/session_events.rst              |  218 ++++
 doc/build/orm/session_state_management.rst    |   57 +-
 doc/build/orm/session_transaction.rst         |    9 +-
 doc/build/orm/tutorial.rst                    |   55 +-
 examples/versioned_history/history_meta.py    |    8 +-
 examples/versioned_history/test_versioning.py |   65 ++
 lib/sqlalchemy/__init__.py                    |    2 +-
 lib/sqlalchemy/dialects/mssql/base.py         |    2 +-
 lib/sqlalchemy/dialects/oracle/base.py        |    3 +
 lib/sqlalchemy/dialects/oracle/cx_oracle.py   |    7 +-
 lib/sqlalchemy/dialects/postgresql/base.py    |   49 +-
 lib/sqlalchemy/dialects/sybase/base.py        |   15 +-
 lib/sqlalchemy/engine/__init__.py             |   25 +-
 lib/sqlalchemy/ext/associationproxy.py        |    9 +-
 lib/sqlalchemy/ext/baked.py                   |   20 +
 lib/sqlalchemy/ext/hybrid.py                  |    2 +-
 lib/sqlalchemy/orm/attributes.py              |    5 +
 lib/sqlalchemy/orm/events.py                  |  397 ++++---
 lib/sqlalchemy/orm/identity.py                |   12 +
 lib/sqlalchemy/orm/mapper.py                  |   45 +-
 lib/sqlalchemy/orm/persistence.py             |   10 +-
 lib/sqlalchemy/orm/query.py                   |   89 +-
 lib/sqlalchemy/orm/relationships.py           |   38 +-
 lib/sqlalchemy/orm/session.py                 |   19 +-
 lib/sqlalchemy/orm/state.py                   |    8 +-
 lib/sqlalchemy/orm/strategies.py              |    5 +-
 lib/sqlalchemy/orm/strategy_options.py        |    2 +-
 lib/sqlalchemy/sql/crud.py                    |    1 +
 lib/sqlalchemy/sql/selectable.py              |    2 +-
 lib/sqlalchemy/sql/sqltypes.py                |    3 -
 test/dialect/postgresql/test_query.py         |  610 ++++++-----
 test/dialect/postgresql/test_reflection.py    |    1 +
 test/dialect/test_oracle.py                   |   26 +
 test/ext/test_associationproxy.py             |   20 +
 test/ext/test_baked.py                        |   49 +-
 test/orm/inheritance/test_poly_persistence.py |   38 +-
 test/orm/test_bulk.py                         |   54 +-
 test/orm/test_events.py                       |   37 +
 test/orm/test_mapper.py                       | 1248 ++++++++++++----------
 test/orm/test_options.py                      |   12 +-
 test/orm/test_query.py                        |   54 +
 test/orm/test_versioning.py                   |   88 ++
 test/sql/test_defaults.py                     |   37 +-
 test/sql/test_insert.py                       |   31 +-
 test/sql/test_insert_exec.py                  |  445 ++++++++
 test/sql/test_query.py                        | 1420 +------------------------
 test/sql/test_resultset.py                    | 1136 ++++++++++++++++++++
 62 files changed, 4473 insertions(+), 2722 deletions(-)

diff --git a/PKG-INFO b/PKG-INFO
index 2921224..fb1c2a9 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: SQLAlchemy
-Version: 1.0.8
+Version: 1.0.9
 Summary: Database Abstraction Library
 Home-page: http://www.sqlalchemy.org
 Author: Mike Bayer
diff --git a/doc/build/changelog/changelog_09.rst b/doc/build/changelog/changelog_09.rst
index 9d781fa..be88729 100644
--- a/doc/build/changelog/changelog_09.rst
+++ b/doc/build/changelog/changelog_09.rst
@@ -15,6 +15,17 @@
     :version: 0.9.11
 
     .. change::
+        :tags: bug, oracle, py3k
+        :tickets: 3491
+        :versions: 1.1.0b1, 1.0.9
+
+        Fixed support for cx_Oracle version 5.2, which was tripping
+        up SQLAlchemy's version detection under Python 3 and inadvertently
+        not using the correct unicode mode for Python 3.  This would cause
+        issues such as bound variables mis-interpreted as NULL and rows
+        silently not being returned.
+
+    .. change::
         :tags: bug, engine
         :tickets: 3497
         :versions: 1.0.8
diff --git a/doc/build/changelog/changelog_10.rst b/doc/build/changelog/changelog_10.rst
index 2dce07b..2442ac7 100644
--- a/doc/build/changelog/changelog_10.rst
+++ b/doc/build/changelog/changelog_10.rst
@@ -16,6 +16,127 @@
         :start-line: 5
 
 .. changelog::
+    :version: 1.0.9
+    :released: October 20, 2015
+
+    .. change::
+        :tags: bug, orm, postgresql
+        :versions: 1.1.0b1
+        :tickets: 3556
+
+        Fixed regression in 1.0 where the new feature of using "executemany"
+        for UPDATE statements in the ORM (e.g. :ref:`feature_updatemany`)
+        would break on Postgresql and other RETURNING backends
+        when using server-side version generation
+        schemes, as the server-side value is retrieved via RETURNING, which
+        is not supported with executemany.
+
+    .. change::
+        :tags: feature, ext
+        :versions: 1.1.0b1
+        :tickets: 3551
+
+        Added the :paramref:`.AssociationProxy.info` parameter to the
+        :class:`.AssociationProxy` constructor, to suit the
+        :attr:`.AssociationProxy.info` accessor that was added in
+        :ticket:`2971`.  This is possible because :class:`.AssociationProxy`
+        is constructed explicitly, unlike a hybrid which is constructed
+        implicitly via the decorator syntax.
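+
+        As a minimal sketch, assuming use of the :func:`.association_proxy`
+        convenience function (which forwards keyword arguments to the
+        :class:`.AssociationProxy` constructor); the ``info`` dictionary
+        contents below are arbitrary placeholders::
+
+            keywords = association_proxy(
+                'kw', 'keyword', info={'category': 'tags'})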
+
+    .. change::
+        :tags: bug, oracle
+        :versions: 1.1.0b1
+        :tickets: 3548
+
+        Fixed bug in Oracle dialect where reflection of tables and other
+        symbols with names quoted to force all-lower-case would not be
+        identified properly in reflection queries.  The :class:`.quoted_name`
+        construct is now applied to incoming symbol names that are detected
+        as being forced into all-lower-case within the "name normalize" process.
+
+    .. change::
+        :tags: feature, orm
+        :versions: 1.1.0b1
+        :pullreq: github:201
+
+        Added the new method :meth:`.Query.one_or_none`; it is the same as
+        :meth:`.Query.one` but returns None if no row is found.  Pull request
+        courtesy esiegerman.
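+
+        A minimal usage sketch, assuming a mapped ``User`` class and an
+        active ``session`` (names here are illustrative only)::
+
+            user = session.query(User).filter_by(name='ed').one_or_none()
+            if user is None:
+                print("no such user")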
+
+    .. change::
+        :tags: bug, orm
+        :versions: 1.1.0b1
+        :tickets: 3539
+
+        Fixed rare TypeError which could occur when stringifying certain
+        kinds of internal column loader options within internal logging.
+
+    .. change::
+        :tags: bug, orm
+        :versions: 1.1.0b1
+        :tickets: 3525
+
+        Fixed bug in :meth:`.Session.bulk_save_objects` where a mapped
+        column that had some kind of "fetch on update" value and was not
+        locally present in the given object would cause an AttributeError
+        within the operation.
+
+    .. change::
+        :tags: bug, sql
+        :versions: 1.1.0b1
+        :tickets: 3520
+
+        Fixed regression in 1.0-released default-processor for multi-VALUES
+        insert statement, :ticket:`3288`, where the column type for the
+        default-holding column would not be propagated to the compiled
+        statement in the case where the default was being used,
+        leading to bind-level type handlers not being invoked.
+
+    .. change::
+        :tags: bug, examples
+        :versions: 1.1.0b1
+
+        Fixed two issues in the "history_meta" example where history tracking
+        could encounter empty history, and where a column keyed to an alternate
+        attribute name would fail to track properly.  Fixes courtesy
+        Alex Fraser.
+
+    .. change::
+        :tags: bug, orm
+        :tickets: 3510
+        :versions: 1.1.0b1
+
+        Fixed 1.0 regression where the "noload" loader strategy would fail
+        to function for a many-to-one relationship.  The loader used an
+        API to place "None" into the dictionary which no longer actually
+        writes a value; this is a side effect of :ticket:`3061`.
+
+    .. change::
+        :tags: bug, sybase
+        :tickets: 3508, 3509
+        :versions: 1.1.0b1
+
+        Fixed two issues regarding Sybase reflection, allowing tables
+        without primary keys to be reflected, as well as ensuring that
+        a SQL statement involved in foreign key detection is pre-fetched up
+        front to avoid driver issues upon nested queries.  Fixes here
+        courtesy Eugene Zapolsky; note that we cannot currently test
+        Sybase to locally verify these changes.
+
+    .. change::
+        :tags: bug, postgresql
+        :pullreq: github:190
+        :versions: 1.1.0b1
+
+        An adjustment to the new Postgresql feature of reflecting storage
+        options and USING of :ticket:`3455` released in 1.0.6,
+        to disable the feature for Postgresql versions < 8.2 where the
+        ``reloptions`` column is not provided; this allows Amazon Redshift
+        to again work as it is based on an 8.0.x version of Postgresql.
+        Fix courtesy Pete Hollobon.
+
+
+.. changelog::
     :version: 1.0.8
     :released: July 22, 2015
 
diff --git a/doc/build/conf.py b/doc/build/conf.py
index f1ebda4..4cc4eb1 100644
--- a/doc/build/conf.py
+++ b/doc/build/conf.py
@@ -138,9 +138,9 @@ copyright = u'2007-2015, the SQLAlchemy authors and contributors'
 # The short X.Y version.
 version = "1.0"
 # The full version, including alpha/beta/rc tags.
-release = "1.0.8"
+release = "1.0.9"
 
-release_date = "July 22, 2015"
+release_date = "October 20, 2015"
 
 site_base = os.environ.get("RTD_SITE_BASE", "http://www.sqlalchemy.org")
 site_adapter_template = "docs_adapter.mako"
diff --git a/doc/build/core/ddl.rst b/doc/build/core/ddl.rst
index 0ba2f28..820ba7b 100644
--- a/doc/build/core/ddl.rst
+++ b/doc/build/core/ddl.rst
@@ -20,85 +20,100 @@ required, SQLAlchemy offers two techniques which can be used to add any DDL
 based on any condition, either accompanying the standard generation of tables
 or by itself.
 
-.. _schema_ddl_sequences:
-
-Controlling DDL Sequences
--------------------------
+Custom DDL
+----------
 
-The ``sqlalchemy.schema`` package contains SQL expression constructs that
-provide DDL expressions. For example, to produce a ``CREATE TABLE`` statement:
+Custom DDL phrases are most easily achieved using the
+:class:`~sqlalchemy.schema.DDL` construct. This construct works like all the
+other DDL elements except it accepts a string which is the text to be emitted:
 
 .. sourcecode:: python+sql
 
-    from sqlalchemy.schema import CreateTable
-    {sql}engine.execute(CreateTable(mytable))
-    CREATE TABLE mytable (
-        col1 INTEGER,
-        col2 INTEGER,
-        col3 INTEGER,
-        col4 INTEGER,
-        col5 INTEGER,
-        col6 INTEGER
-    ){stop}
+    event.listen(
+        metadata,
+        "after_create",
+        DDL("ALTER TABLE users ADD CONSTRAINT "
+            "cst_user_name_length "
+            " CHECK (length(user_name) >= 8)")
+    )
 
-Above, the :class:`~sqlalchemy.schema.CreateTable` construct works like any
-other expression construct (such as ``select()``, ``table.insert()``, etc.). A
-full reference of available constructs is in :ref:`schema_api_ddl`.
+A more comprehensive method of creating libraries of DDL constructs is to use
+custom compilation - see :ref:`sqlalchemy.ext.compiler_toplevel` for
+details.
 
-The DDL constructs all extend a common base class which provides the
-capability to be associated with an individual
-:class:`~sqlalchemy.schema.Table` or :class:`~sqlalchemy.schema.MetaData`
-object, to be invoked upon create/drop events. Consider the example of a table
-which contains a CHECK constraint:
 
-.. sourcecode:: python+sql
+.. _schema_ddl_sequences:
+
+Controlling DDL Sequences
+-------------------------
 
-    users = Table('users', metadata,
-                   Column('user_id', Integer, primary_key=True),
-                   Column('user_name', String(40), nullable=False),
-                   CheckConstraint('length(user_name) >= 8',name="cst_user_name_length")
-                   )
+The :class:`~.schema.DDL` construct introduced previously also has the
+ability to be invoked conditionally based on inspection of the
+database.  This feature is available using the :meth:`.DDLElement.execute_if`
+method.  For example, if we wanted to create a trigger but only on
+the Postgresql backend, we could invoke this as::
 
-    {sql}users.create(engine)
-    CREATE TABLE users (
-        user_id SERIAL NOT NULL,
-        user_name VARCHAR(40) NOT NULL,
-        PRIMARY KEY (user_id),
-        CONSTRAINT cst_user_name_length  CHECK (length(user_name) >= 8)
-    ){stop}
+    mytable = Table(
+        'mytable', metadata,
+        Column('id', Integer, primary_key=True),
+        Column('data', String(50))
+    )
 
-The above table contains a column "user_name" which is subject to a CHECK
-constraint that validates that the length of the string is at least eight
-characters. When a ``create()`` is issued for this table, DDL for the
-:class:`~sqlalchemy.schema.CheckConstraint` will also be issued inline within
-the table definition.
+    trigger = DDL(
+        "CREATE TRIGGER dt_ins BEFORE INSERT ON mytable "
+        "FOR EACH ROW BEGIN SET NEW.data='ins'; END"
+    )
 
-The :class:`~sqlalchemy.schema.CheckConstraint` construct can also be
-constructed externally and associated with the
-:class:`~sqlalchemy.schema.Table` afterwards::
+    event.listen(
+        mytable,
+        'after_create',
+        trigger.execute_if(dialect='postgresql')
+    )
+
+The :paramref:`.DDLElement.execute_if.dialect` keyword also accepts a tuple
+of string dialect names::
 
-    constraint = CheckConstraint('length(user_name) >= 8',name="cst_user_name_length")
-    users.append_constraint(constraint)
+    event.listen(
+        mytable,
+        "after_create",
+        trigger.execute_if(dialect=('postgresql', 'mysql'))
+    )
+    event.listen(
+        mytable,
+        "before_drop",
+        trigger.execute_if(dialect=('postgresql', 'mysql'))
+    )
 
-So far, the effect is the same. However, if we create DDL elements
-corresponding to the creation and removal of this constraint, and associate
-them with the :class:`.Table` as events, these new events
-will take over the job of issuing DDL for the constraint. Additionally, the
-constraint will be added via ALTER:
+The :meth:`.DDLElement.execute_if` method can also work against a callable
+function that will receive the database connection in use.  In the
+example below, we use this to conditionally create a CHECK constraint,
+first looking within the Postgresql catalogs to see if it exists:
 
 .. sourcecode:: python+sql
 
-    from sqlalchemy import event
+    def should_create(ddl, target, connection, **kw):
+        row = connection.execute(
+            "select conname from pg_constraint where conname='%s'" %
+            ddl.element.name).scalar()
+        return not bool(row)
+
+    def should_drop(ddl, target, connection, **kw):
+        return not should_create(ddl, target, connection, **kw)
 
     event.listen(
         users,
         "after_create",
-        AddConstraint(constraint)
+        DDL(
+            "ALTER TABLE users ADD CONSTRAINT "
+            "cst_user_name_length CHECK (length(user_name) >= 8)"
+        ).execute_if(callable_=should_create)
     )
     event.listen(
         users,
         "before_drop",
-        DropConstraint(constraint)
+        DDL(
+            "ALTER TABLE users DROP CONSTRAINT cst_user_name_length"
+        ).execute_if(callable_=should_drop)
     )
 
     {sql}users.create(engine)
@@ -108,61 +123,67 @@ constraint will be added via ALTER:
         PRIMARY KEY (user_id)
     )
 
+    select conname from pg_constraint where conname='cst_user_name_length'
     ALTER TABLE users ADD CONSTRAINT cst_user_name_length  CHECK (length(user_name) >= 8){stop}
 
     {sql}users.drop(engine)
+    select conname from pg_constraint where conname='cst_user_name_length'
     ALTER TABLE users DROP CONSTRAINT cst_user_name_length
     DROP TABLE users{stop}
 
-The real usefulness of the above becomes clearer once we illustrate the
-:meth:`.DDLElement.execute_if` method.  This method returns a modified form of
-the DDL callable which will filter on criteria before responding to a
-received event.   It accepts a parameter ``dialect``, which is the string
-name of a dialect or a tuple of such, which will limit the execution of the
-item to just those dialects.  It also accepts a ``callable_`` parameter which
-may reference a Python callable which will be invoked upon event reception,
-returning ``True`` or ``False`` indicating if the event should proceed.
-
-If our :class:`~sqlalchemy.schema.CheckConstraint` was only supported by
-Postgresql and not other databases, we could limit its usage to just that dialect::
+Using the built-in DDLElement Classes
+--------------------------------------
 
-    event.listen(
-        users,
-        'after_create',
-        AddConstraint(constraint).execute_if(dialect='postgresql')
-    )
-    event.listen(
-        users,
-        'before_drop',
-        DropConstraint(constraint).execute_if(dialect='postgresql')
-    )
+The ``sqlalchemy.schema`` package contains SQL expression constructs that
+provide DDL expressions. For example, to produce a ``CREATE TABLE`` statement:
 
-Or to any set of dialects::
+.. sourcecode:: python+sql
 
-    event.listen(
-        users,
-        "after_create",
-        AddConstraint(constraint).execute_if(dialect=('postgresql', 'mysql'))
-    )
-    event.listen(
-        users,
-        "before_drop",
-        DropConstraint(constraint).execute_if(dialect=('postgresql', 'mysql'))
-    )
+    from sqlalchemy.schema import CreateTable
+    {sql}engine.execute(CreateTable(mytable))
+    CREATE TABLE mytable (
+        col1 INTEGER,
+        col2 INTEGER,
+        col3 INTEGER,
+        col4 INTEGER,
+        col5 INTEGER,
+        col6 INTEGER
+    ){stop}
 
-When using a callable, the callable is passed the ddl element, the
-:class:`.Table` or :class:`.MetaData`
-object whose "create" or "drop" event is in progress, and the
-:class:`.Connection` object being used for the
-operation, as well as additional information as keyword arguments. The
-callable can perform checks, such as whether or not a given item already
-exists. Below we define ``should_create()`` and ``should_drop()`` callables
-that check for the presence of our named constraint:
+Above, the :class:`~sqlalchemy.schema.CreateTable` construct works like any
+other expression construct (such as ``select()``, ``table.insert()``, etc.).
+All of SQLAlchemy's DDL oriented constructs are subclasses of
+the :class:`.DDLElement` base class; this is the base of all the
+objects corresponding to CREATE and DROP as well as ALTER,
+not only in SQLAlchemy but in Alembic Migrations as well.
+A full reference of available constructs is in :ref:`schema_api_ddl`.
+
+User-defined DDL constructs may also be created as subclasses of
+:class:`.DDLElement` itself.   The documentation in
+:ref:`sqlalchemy.ext.compiler_toplevel` has several examples of this.
+
+The event-driven DDL system described in the previous section
+:ref:`schema_ddl_sequences` is available with other :class:`.DDLElement`
+objects as well.  However, when dealing with the built-in constructs
+such as :class:`.CreateIndex`, :class:`.CreateSequence`, etc., the event
+system is of **limited** use, as methods like :meth:`.Table.create` and
+:meth:`.MetaData.create_all` will invoke these constructs unconditionally.
+In a future SQLAlchemy release, the DDL event system, including conditional
+execution, will be taken into account for built-in constructs that currently
+invoke in all cases.
+
+We can illustrate an event-driven
+example with the :class:`.AddConstraint` and :class:`.DropConstraint`
+constructs, as the event-driven system will work for CHECK and UNIQUE
+constraints, using these as we did in our previous example of
+:meth:`.DDLElement.execute_if`:
 
 .. sourcecode:: python+sql
 
     def should_create(ddl, target, connection, **kw):
-        row = connection.execute("select conname from pg_constraint where conname='%s'" % ddl.element.name).scalar()
+        row = connection.execute(
+            "select conname from pg_constraint where conname='%s'" %
+            ddl.element.name).scalar()
         return not bool(row)
 
     def should_drop(ddl, target, connection, **kw):
@@ -194,26 +215,12 @@ that check for the presence of our named constraint:
     ALTER TABLE users DROP CONSTRAINT cst_user_name_length
     DROP TABLE users{stop}
 
-Custom DDL
-----------
-
-Custom DDL phrases are most easily achieved using the
-:class:`~sqlalchemy.schema.DDL` construct. This construct works like all the
-other DDL elements except it accepts a string which is the text to be emitted:
-
-.. sourcecode:: python+sql
-
-    event.listen(
-        metadata,
-        "after_create",
-        DDL("ALTER TABLE users ADD CONSTRAINT "
-            "cst_user_name_length "
-            " CHECK (length(user_name) >= 8)")
-    )
-
-A more comprehensive method of creating libraries of DDL constructs is to use
-custom compilation - see :ref:`sqlalchemy.ext.compiler_toplevel` for
-details.
+While the above example is against the built-in :class:`.AddConstraint`
+and :class:`.DropConstraint` objects, the main usefulness of DDL events
+for now remains focused on the use of the :class:`.DDL` construct itself,
+as well as on user-defined subclasses of :class:`.DDLElement` that aren't
+already part of the :meth:`.MetaData.create_all`, :meth:`.Table.create`,
+and corresponding "drop" processes.
 
 .. _schema_api_ddl:
 
@@ -233,6 +240,7 @@ DDL Expression Constructs API
     :members:
     :undoc-members:
 
+.. autoclass:: _CreateDropBase
 
 .. autoclass:: CreateTable
     :members:
diff --git a/doc/build/core/events.rst b/doc/build/core/events.rst
index d19b910..451cb94 100644
--- a/doc/build/core/events.rst
+++ b/doc/build/core/events.rst
@@ -11,10 +11,6 @@ ORM events are described in :ref:`orm_event_toplevel`.
 .. autoclass:: sqlalchemy.event.base.Events
    :members:
 
-.. versionadded:: 0.7
-    The event system supersedes the previous system of "extension", "listener",
-    and "proxy" classes.
-
 Connection Pool Events
 -----------------------
 
diff --git a/doc/build/core/tutorial.rst b/doc/build/core/tutorial.rst
index cc2a976..e660e2d 100644
--- a/doc/build/core/tutorial.rst
+++ b/doc/build/core/tutorial.rst
@@ -754,8 +754,8 @@ method calls is called :term:`method chaining`.
 
 .. _sqlexpression_text:
 
-Using Text
-===========
+Using Textual SQL
+=================
 
 Our last example really became a handful to type. Going from what one
 understands to be a textual SQL expression into a Python construct which
@@ -794,7 +794,27 @@ construct using the :meth:`~.TextClause.bindparams` method; if we are
 using datatypes that need special handling as they are received in Python,
 or we'd like to compose our :func:`~.expression.text` object into a larger
 expression, we may also wish to use the :meth:`~.TextClause.columns` method
-in order to specify column return types and names.
+in order to specify column return types and names:
+
+.. sourcecode:: pycon+sql
+
+    >>> s = text(
+    ...     "SELECT users.fullname || ', ' || addresses.email_address AS title "
+    ...         "FROM users, addresses "
+    ...         "WHERE users.id = addresses.user_id "
+    ...         "AND users.name BETWEEN :x AND :y "
+    ...         "AND (addresses.email_address LIKE :e1 "
+    ...             "OR addresses.email_address LIKE :e2)")
+    >>> s = s.columns(title=String)
+    >>> s = s.bindparams(x='m', y='z', e1='%@aol.com', e2='%@msn.com')
+    >>> conn.execute(s).fetchall() # doctest:+NORMALIZE_WHITESPACE
+    SELECT users.fullname || ', ' || addresses.email_address AS title
+    FROM users, addresses
+    WHERE users.id = addresses.user_id AND users.name BETWEEN ? AND ? AND
+    (addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
+    ('m', 'z', '%@aol.com', '%@msn.com')
+    {stop}[(u'Wendy Williams, wendy at aol.com',)]
+
 
 :func:`~.expression.text` can also be used freely within a
 :func:`~.expression.select` object, which accepts :func:`~.expression.text`
@@ -841,6 +861,11 @@ need to refer to any pre-established :class:`.Table` metadata:
     the less flexibility and ability for manipulation/transformation
     the statement will have.
 
+.. seealso::
+
+    :ref:`orm_tutorial_literal_sql` - integrating ORM-level queries with
+    :func:`.text`
+
 .. versionchanged:: 1.0.0
    The :func:`.select` construct emits warnings when string SQL
    fragments are coerced to :func:`.text`, and :func:`.text` should
diff --git a/doc/build/faq/connections.rst b/doc/build/faq/connections.rst
index 81a8678..658b4f7 100644
--- a/doc/build/faq/connections.rst
+++ b/doc/build/faq/connections.rst
@@ -136,3 +136,84 @@ when :meth:`.Connection.close` is called::
     conn.detach()  # detaches the DBAPI connection from the connection pool
     conn.connection.<go nuts>
     conn.close()  # connection is closed for real, the pool replaces it with a new connection
+
+How do I use engines / connections / sessions with Python multiprocessing, or os.fork()?
+----------------------------------------------------------------------------------------
+
+The key goal with multiple Python processes is to prevent any database connections
+from being shared across processes.   Depending on the specifics of the driver and OS,
+the issues that arise here range from non-working connections to socket connections that
+are used by multiple processes concurrently, leading to broken messaging (the latter
+case is typically the most common).
+
+The SQLAlchemy :class:`.Engine` object refers to a connection pool of existing
+database connections.  So when this object is replicated to a child process,
+the goal is to ensure that no database connections are carried over.  There
+are three general approaches to this:
+
+1. Disable pooling using :class:`.NullPool`.  This is the simplest approach;
+   it prevents the :class:`.Engine` from ever using any connection
+   more than once.
+
+2. Call :meth:`.Engine.dispose` on any given :class:`.Engine` as soon as one is
+   within the new process.  In Python multiprocessing, constructs such as
+   ``multiprocessing.Pool`` include "initializer" hooks which are a place
+   where this can be performed; otherwise, at the point where ``os.fork()``
+   is called or where the ``Process`` object begins the child fork, a single
+   call to :meth:`.Engine.dispose` will ensure any remaining connections are
+   flushed.  Minimal sketches of these first two approaches appear after the
+   event-handler example below.
+
+3. An event handler can be applied to the connection pool that tests for connections
+   being shared across process boundaries, and invalidates them.  This looks like
+   the following::
+
+        import os
+        import warnings
+
+        from sqlalchemy import event
+        from sqlalchemy import exc
+
+        def add_engine_pidguard(engine):
+            """Add multiprocessing guards.
+
+            Forces a connection to be reconnected if it is detected
+            as having been shared to a sub-process.
+
+            """
+
+            @event.listens_for(engine, "connect")
+            def connect(dbapi_connection, connection_record):
+                connection_record.info['pid'] = os.getpid()
+
+            @event.listens_for(engine, "checkout")
+            def checkout(dbapi_connection, connection_record, connection_proxy):
+                pid = os.getpid()
+                if connection_record.info['pid'] != pid:
+                    # substitute log.debug() or similar here as desired
+                    warnings.warn(
+                        "Parent process %(orig)s forked (%(newproc)s) with an open "
+                        "database connection, "
+                        "which is being discarded and recreated." %
+                        {"newproc": pid, "orig": connection_record.info['pid']})
+                    connection_record.connection = connection_proxy.connection = None
+                    raise exc.DisconnectionError(
+                        "Connection record belongs to pid %s, "
+                        "attempting to check out in pid %s" %
+                        (connection_record.info['pid'], pid)
+                    )
+
+   These events are applied to an :class:`.Engine` as soon as it's created::
+
+        engine = create_engine("...")
+
+        add_engine_pidguard(engine)
+
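+For the first two approaches, minimal sketches follow; the database URL
+and worker setup shown here are placeholders.  Approach #1, disabling
+pooling entirely::
+
+    from sqlalchemy import create_engine
+    from sqlalchemy.pool import NullPool
+
+    # every checkout opens a brand new DBAPI connection, which is
+    # closed again upon release rather than being pooled
+    engine = create_engine(
+        "postgresql://scott:tiger@localhost/test", poolclass=NullPool)
+
+Approach #2, disposing of any inherited connections as each worker
+process starts::
+
+    from multiprocessing import Pool
+
+    def init_worker():
+        # discard pooled connections carried over from the parent process
+        engine.dispose()
+
+    pool = Pool(processes=4, initializer=init_worker)
+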
+The above strategies will accommodate the case of an :class:`.Engine`
+being shared among processes.  However, for the case of a transaction-active
+:class:`.Session` or :class:`.Connection` being shared, there's no automatic
+fix for this; an application needs to ensure that a new child process only
+initiates new :class:`.Connection` objects and transactions, as well as new
+ORM :class:`.Session` objects.  For a :class:`.Session` object, technically
+this is only needed if the session is currently transaction-bound; however,
+the scope of a single :class:`.Session` is in any case intended to be
+kept within a single call stack (e.g. not a global object, not
+shared between processes or threads).
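+
+For example, a worker function might construct its own :class:`.Session`
+locally, rather than receiving one from the parent process (a sketch only;
+the factory and function names are placeholders)::
+
+    from sqlalchemy.orm import sessionmaker
+
+    Session = sessionmaker(bind=engine)
+
+    def run_task(task_id):
+        # each invocation in the child process uses its own Session,
+        # begun and ended entirely within this call stack
+        session = Session()
+        try:
+            # ... perform ORM work with this process-local session ...
+            session.commit()
+        finally:
+            session.close()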
diff --git a/doc/build/faq/sessions.rst b/doc/build/faq/sessions.rst
index e3aae00..2e4bdd4 100644
--- a/doc/build/faq/sessions.rst
+++ b/doc/build/faq/sessions.rst
@@ -417,6 +417,77 @@ The recipe `ExpireRelationshipOnFKChange <http://www.sqlalchemy.org/trac/wiki/Us
 in order to coordinate the setting of foreign key attributes with many-to-one
 relationships.
 
+.. _faq_walk_objects:
+
+How do I walk all objects that are related to a given object?
+-------------------------------------------------------------
+
+An object that has other objects related to it will correspond to the
+:func:`.relationship` constructs set up between mappers.  This code fragment will
+iterate through all the related objects, guarding against cycles as well::
+
+    from sqlalchemy import inspect
+
+
+    def walk(obj):
+        deque = [obj]
+
+        seen = set()
+
+        while deque:
+            obj = deque.pop(0)
+            if obj in seen:
+                continue
+            else:
+                seen.add(obj)
+                yield obj
+            insp = inspect(obj)
+            for relationship in insp.mapper.relationships:
+                related = getattr(obj, relationship.key)
+                if relationship.uselist:
+                    deque.extend(related)
+                elif related is not None:
+                    deque.append(related)
+
+The function can be demonstrated as follows::
+
+    Base = declarative_base()
+
+
+    class A(Base):
+        __tablename__ = 'a'
+        id = Column(Integer, primary_key=True)
+        bs = relationship("B", backref="a")
+
+
+    class B(Base):
+        __tablename__ = 'b'
+        id = Column(Integer, primary_key=True)
+        a_id = Column(ForeignKey('a.id'))
+        c_id = Column(ForeignKey('c.id'))
+        c = relationship("C", backref="bs")
+
+
+    class C(Base):
+        __tablename__ = 'c'
+        id = Column(Integer, primary_key=True)
+
+
+    a1 = A(bs=[B(), B(c=C())])
+
+
+    for obj in walk(a1):
+        print(obj)
+
+Output::
+
+    <__main__.A object at 0x10303b190>
+    <__main__.B object at 0x103025210>
+    <__main__.B object at 0x10303b0d0>
+    <__main__.C object at 0x103025490>
+
+
+
 Is there a way to automagically have only unique keywords (or other kinds of objects) without doing a query for the keyword and getting a reference to the row containing that keyword?
 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 
diff --git a/doc/build/orm/events.rst b/doc/build/orm/events.rst
index e9673be..470a938 100644
--- a/doc/build/orm/events.rst
+++ b/doc/build/orm/events.rst
@@ -5,12 +5,10 @@ ORM Events
 
 The ORM includes a wide variety of hooks available for subscription.
 
-.. versionadded:: 0.7
-    The event supersedes the previous system of "extension" classes.
-
-For an introduction to the event API, see :ref:`event_toplevel`.  Non-ORM events
-such as those regarding connections and low-level statement execution are described in
-:ref:`core_event_toplevel`.
+For an introduction to the most commonly used ORM events, see the section
+:ref:`session_events_toplevel`.   The event system in general is discussed
+at :ref:`event_toplevel`.  Non-ORM events such as those regarding connections
+and low-level statement execution are described in :ref:`core_event_toplevel`.
 
 Attribute Events
 ----------------
diff --git a/doc/build/orm/examples.rst b/doc/build/orm/examples.rst
index 4db7c00..25d2430 100644
--- a/doc/build/orm/examples.rst
+++ b/doc/build/orm/examples.rst
@@ -93,6 +93,8 @@ Versioning with a History Table
 
 .. automodule:: examples.versioned_history
 
+.. _examples_versioned_rows:
+
 Versioning using Temporal Rows
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/doc/build/orm/extensions/associationproxy.rst b/doc/build/orm/extensions/associationproxy.rst
index 6fc57e3..7e7b1f9 100644
--- a/doc/build/orm/extensions/associationproxy.rst
+++ b/doc/build/orm/extensions/associationproxy.rst
@@ -509,5 +509,6 @@ API Documentation
 .. autoclass:: AssociationProxy
    :members:
    :undoc-members:
+   :inherited-members:
 
 .. autodata:: ASSOCIATION_PROXY
diff --git a/doc/build/orm/inheritance.rst b/doc/build/orm/inheritance.rst
index 0713634..290d809 100644
--- a/doc/build/orm/inheritance.rst
+++ b/doc/build/orm/inheritance.rst
@@ -228,9 +228,9 @@ subclasses:
     entity = with_polymorphic(Employee, [Engineer, Manager])
 
     # join to all subclass tables
-    entity = query.with_polymorphic(Employee, '*')
+    entity = with_polymorphic(Employee, '*')
 
-    # use with Query
+    # use the 'entity' with a Query object
     session.query(entity).all()
 
 It also accepts a third argument ``selectable`` which replaces the automatic
@@ -249,7 +249,7 @@ should be used to load polymorphically::
                 employee.outerjoin(manager).outerjoin(engineer)
             )
 
-    # use with Query
+    # use the 'entity' with a Query object
     session.query(entity).all()
 
 Note that if you only need to load a single subtype, such as just the
diff --git a/doc/build/orm/relationship_persistence.rst b/doc/build/orm/relationship_persistence.rst
index 8af96cb..d4fca2c 100644
--- a/doc/build/orm/relationship_persistence.rst
+++ b/doc/build/orm/relationship_persistence.rst
@@ -172,56 +172,108 @@ Mutable Primary Keys / Update Cascades
 When the primary key of an entity changes, related items
 which reference the primary key must also be updated as
 well. For databases which enforce referential integrity,
-it's required to use the database's ON UPDATE CASCADE
+the best strategy is to use the database's ON UPDATE CASCADE
 functionality in order to propagate primary key changes
 to referenced foreign keys - the values cannot be out
-of sync for any moment.
-
-For databases that don't support this, such as SQLite and
-MySQL without their referential integrity options turned
-on, the :paramref:`~.relationship.passive_updates` flag can
-be set to ``False``, most preferably on a one-to-many or
-many-to-many :func:`.relationship`, which instructs
-SQLAlchemy to issue UPDATE statements individually for
-objects referenced in the collection, loading them into
-memory if not already locally present. The
-:paramref:`~.relationship.passive_updates` flag can also be ``False`` in
-conjunction with ON UPDATE CASCADE functionality,
-although in that case the unit of work will be issuing
-extra SELECT and UPDATE statements unnecessarily.
-
-A typical mutable primary key setup might look like::
+of sync for any moment unless the constraints are marked as "deferrable",
+that is, not enforced until the transaction completes.
+
+It is **highly recommended** that an application which seeks to employ
+natural primary keys with mutable values use the ``ON UPDATE CASCADE``
+capabilities of the database.   An example mapping which
+illustrates this is::
 
     class User(Base):
         __tablename__ = 'user'
+        __table_args__ = {'mysql_engine': 'InnoDB'}
 
         username = Column(String(50), primary_key=True)
         fullname = Column(String(100))
 
-        # passive_updates=False *only* needed if the database
-        # does not implement ON UPDATE CASCADE
-        addresses = relationship("Address", passive_updates=False)
+        addresses = relationship("Address")
+
 
     class Address(Base):
         __tablename__ = 'address'
+        __table_args__ = {'mysql_engine': 'InnoDB'}
 
         email = Column(String(50), primary_key=True)
         username = Column(String(50),
                     ForeignKey('user.username', onupdate="cascade")
                 )
 
-:paramref:`~.relationship.passive_updates` is set to ``True`` by default,
-indicating that ON UPDATE CASCADE is expected to be in
-place in the usual case for foreign keys that expect
-to have a mutating parent key.
-
-A :paramref:`~.relationship.passive_updates` setting of False may be configured on any
-direction of relationship, i.e. one-to-many, many-to-one,
-and many-to-many, although it is much more effective when
-placed just on the one-to-many or many-to-many side.
-Configuring the :paramref:`~.relationship.passive_updates`
-to False only on the
-many-to-one side will have only a partial effect, as the
-unit of work searches only through the current identity
-map for objects that may be referencing the one with a
-mutating primary key, not throughout the database.
+Above, we illustrate ``onupdate="cascade"`` on the :class:`.ForeignKey`
+object, and we also illustrate the ``mysql_engine='InnoDB'`` setting
+which, on a MySQL backend, ensures that the ``InnoDB`` engine supporting
+referential integrity is used.  When using SQLite, referential integrity
+should be enabled, using the configuration described at
+:ref:`sqlite_foreign_keys`.
+
+Simulating limited ON UPDATE CASCADE without foreign key support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In those cases where a database that does not support referential integrity
+is used, and natural primary keys with mutable values are in play,
+SQLAlchemy offers a feature to allow propagation of primary key
+values to already-referenced foreign keys to a **limited** extent,
+by emitting an UPDATE statement against foreign key columns that immediately
+reference a primary key column whose value has changed.
+The primary platforms without referential integrity features are
+MySQL when the ``MyISAM`` storage engine is used, and SQLite when the
+``PRAGMA foreign_keys=ON`` pragma is not used.  The Oracle database also
+has no support for ``ON UPDATE CASCADE``; however, because it still enforces
+referential integrity, it needs constraints to be marked as deferrable
+so that SQLAlchemy can emit UPDATE statements.
+
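+A deferrable foreign key of this kind might be declared as in the sketch
+below; the column names match the earlier ``Address`` example, while the
+exact ``initially`` setting is up to the application::
+
+    username = Column(
+        String(50),
+        ForeignKey(
+            'user.username',
+            deferrable=True, initially='deferred'
+        )
+    )
+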
+The feature is enabled by setting the
+:paramref:`~.relationship.passive_updates` flag to ``False``,
+most preferably on a one-to-many or
+many-to-many :func:`.relationship`.  When "updates" are no longer
+"passive" this indicates that SQLAlchemy will
+issue UPDATE statements individually for
+objects referenced in the collection referred to by the parent object
+with a changing primary key value.  This also implies that collections
+will be fully loaded into memory if not already locally present.
+
+Our previous mapping using ``passive_updates=False`` looks like::
+
+    class User(Base):
+        __tablename__ = 'user'
+
+        username = Column(String(50), primary_key=True)
+        fullname = Column(String(100))
+
+        # passive_updates=False *only* needed if the database
+        # does not implement ON UPDATE CASCADE
+        addresses = relationship("Address", passive_updates=False)
+
+    class Address(Base):
+        __tablename__ = 'address'
+
+        email = Column(String(50), primary_key=True)
+        username = Column(String(50), ForeignKey('user.username'))
+
+Key limitations of ``passive_updates=False`` include:
+
... 9333 lines suppressed ...

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/python-modules/packages/sqlalchemy.git


