[Python-modules-commits] [pyodbc] 01/03: Import pyodbc_3.0.10.orig.tar.gz
Laurent Bigonville
bigon at moszumanska.debian.org
Sun Jul 10 12:12:24 UTC 2016
This is an automated email from the git hooks/post-receive script.
bigon pushed a commit to branch master
in repository pyodbc.
commit 203ffd15e1d32e64aac888e007e14188371337ea
Author: Laurent Bigonville <bigon at bigon.be>
Date: Sun Jul 10 14:00:35 2016 +0200
Import pyodbc_3.0.10.orig.tar.gz
---
PKG-INFO | 42 +++++------
README.rst | 171 --------------------------------------------
pyodbc.egg-info/PKG-INFO | 42 +++++------
pyodbc.egg-info/SOURCES.txt | 26 +++----
setup.cfg | 13 ++--
setup.py | 71 ++++++++++++------
src/cnxninfo.cpp | 39 ++++------
src/cnxninfo.h | 4 ++
src/connection.cpp | 19 +++--
src/connection.h | 17 +++--
src/cursor.cpp | 125 +++++++++++++++++++++++++-------
src/getdata.cpp | 67 ++++++++++++-----
src/params.cpp | 22 +++---
src/pyodbc.h | 6 ++
src/pyodbcmodule.cpp | 13 +++-
src/row.cpp | 131 ++++++++++++++++++++++++++++-----
src/row.h | 2 +-
src/sqlwchar.cpp | 62 ++++++++--------
src/sqlwchar.h | 34 ++++++---
src/wrapper.h | 32 ++++++++-
20 files changed, 534 insertions(+), 404 deletions(-)
diff --git a/PKG-INFO b/PKG-INFO
index 5199616..0f78a41 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,21 +1,21 @@
-Metadata-Version: 1.0
-Name: pyodbc
-Version: 3.0.6
-Summary: DB API Module for ODBC
-Home-page: http://code.google.com/p/pyodbc
-Author: Michael Kleehammer
-Author-email: michael at kleehammer.com
-License: MIT
-Download-URL: http://code.google.com/p/pyodbc/downloads/list
-Description: A Python DB API 2 module for ODBC. This project provides an up-to-date, convenient interface to ODBC using native data types like datetime and decimal.
-Platform: UNKNOWN
-Classifier: Development Status :: 5 - Production/Stable
-Classifier: Intended Audience :: Developers
-Classifier: Intended Audience :: System Administrators
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Operating System :: Microsoft :: Windows
-Classifier: Operating System :: POSIX
-Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 3
-Classifier: Topic :: Database
+Metadata-Version: 1.1
+Name: pyodbc
+Version: 3.0.10
+Summary: DB API Module for ODBC
+Home-page: http://code.google.com/p/pyodbc
+Author: Michael Kleehammer
+Author-email: michael at kleehammer.com
+License: MIT
+Download-URL: http://code.google.com/p/pyodbc/downloads/list
+Description: A Python DB API 2 module for ODBC. This project provides an up-to-date, convenient interface to ODBC using native data types like datetime and decimal.
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Operating System :: Microsoft :: Windows
+Classifier: Operating System :: POSIX
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 3
+Classifier: Topic :: Database
diff --git a/README.rst b/README.rst
deleted file mode 100644
index 1a6a1cc..0000000
--- a/README.rst
+++ /dev/null
@@ -1,171 +0,0 @@
-
-Overview
-========
-
-This project is a Python database module for ODBC that implements the Python DB API 2.0
-specification.
-
-:homepage: http://code.google.com/p/pyodbc
-:source: http://github.com/mkleehammer/pyodbc
-:source: http://code.google.com/p/pyodbc/source/list
-
-This module requires:
-
-* Python 2.4 or greater
-* ODBC 3.0 or greater
-
-On Windows, the easiest way to install is to use the Windows installers from:
-
- http://code.google.com/p/pyodbc/downloads/list
-
-Source can be obtained at
-
- http://github.com/mkleehammer/pyodbc/tree
-
- or
-
- http://code.google.com/p/pyodbc/source/list
-
-To build from source, either check the source out of version control or download a source
-extract and run::
-
- python setup.py build install
-
-Module Specific Behavior
-========================
-
-General
--------
-
-* The pyodbc.connect function accepts a single parameter: the ODBC connection string. This
- string is not read or modified by pyodbc, so consult the ODBC documentation or your ODBC
- driver's documentation for details. The general format is::
-
- cnxn = pyodbc.connect('DSN=mydsn;UID=userid;PWD=pwd')
-
-* Connection caching in the ODBC driver manager is automatically enabled.
-
-* Call cnxn.commit() since the DB API specification requires a rollback when a connection
- is closed that was not specifically committed.
-
-* When a connection is closed, all cursors created from the connection are closed.
-
-
-Data Types
-----------
-
-* Dates, times, and timestamps use the Python datetime module's date, time, and datetime
- classes. These classes can be passed directly as parameters and will be returned when
- querying date/time columns.
-
-* Binary data is passed and returned in Python buffer objects.
-
-* Decimal and numeric columns are passed and returned using the Python 2.4 decimal class.
-
-
-Convenient Additions
---------------------
-
-* Cursors are iterable and return Row objects.
-
- ::
-
- cursor.execute("select a,b from tmp")
- for row in cursor:
- print row
-
-
-* The DB API specifies that results must be tuple-like, so columns are normally accessed by
- indexing into the sequence (e.g. row[0]) and pyodbc supports this. However, columns can also
- be accessed by name::
-
- cursor.execute("select album_id, photo_id from photos where user_id=1")
- row = cursor.fetchone()
- print row.album_id, row.photo_id
- print row[0], row[1] # same as above, but less readable
-
- This makes the code easier to maintain when modifying SQL, more readable, and allows rows to
- be used where a custom class might otherwise be used. All rows from a single execute share
- the same dictionary of column names, so using Row objects to hold a large result set may also
- use less memory than creating an object for each row.
-
- The SQL "as" keyword allows the name of a column in the result set to be specified. This is
- useful if a column name has spaces or if there is no name::
-
- cursor.execute("select count(*) as photo_count from photos where user_id < 100")
- row = cursor.fetchone()
- print row.photo_count
-
-
-* The DB API specification does not specify the return value of Cursor.execute. Previous
- versions of pyodbc (2.0.x) returned different values, but the 2.1 versions always return the
- Cursor itself.
-
- This allows for compact code such as::
-
- for row in cursor.execute("select album_id, photo_id from photos where user_id=1"):
- print row.album_id, row.photo_id
-
- row = cursor.execute("select * from tmp").fetchone()
- rows = cursor.execute("select * from tmp").fetchall()
-
- count = cursor.execute("update photos set processed=1 where user_id=1").rowcount
- count = cursor.execute("delete from photos where user_id=1").rowcount
-
-
-* Though SQL is very powerful, values sometimes need to be modified before they can be
- used. Rows allow their values to be replaced, which makes them even more convenient ad-hoc
- data structures.
-
- ::
-
- # Replace the 'start_date' datetime in each row with one that has a time zone.
- rows = cursor.fetchall()
- for row in rows:
- row.start_date = row.start_date.astimezone(tz)
-
- Note that columns cannot be added to rows; only values for existing columns can be modified.
-
-
-* As specified in the DB API, Cursor.execute accepts an optional sequence of parameters::
-
- cursor.execute("select a from tbl where b=? and c=?", (x, y))
-
- However, this seems complicated for something as simple as passing parameters, so pyodbc also
- accepts the parameters directly. Note in this example that x & y are not in a tuple::
-
- cursor.execute("select a from tbl where b=? and c=?", x, y)
-
-* The DB API specifies that connections require a manual commit and pyodbc complies with
- this. However, connections also support autocommit, using the autocommit keyword of the
- connection function or the autocommit attribute of the Connection object::
-
- cnxn = pyodbc.connect(cstring, autocommit=True)
-
- or
-
- ::
-
- cnxn.autocommit = True
- cnxn.autocommit = False
-
-
-Goals / Design
-==============
-
-* This module should not require any 3rd party modules other than ODBC.
-
-* Only built-in data types should be used where possible.
-
- a) Reduces the number of libraries to learn.
-
- b) Reduces the number of modules and libraries to install.
-
- c) Eventually a standard is usually introduced. For example, many previous database drivers
- used the mxDate classes. Now that Python 2.3 has introduced built-in date/time classes,
- using those modules is more complicated than using the built-ins.
-
-* It should adhere to the DB API specification, but be more "Pythonic" when convenient.
- The most common usages should be optimized for convenience and speed.
-
-* All ODBC functionality should (eventually) be exposed.
diff --git a/pyodbc.egg-info/PKG-INFO b/pyodbc.egg-info/PKG-INFO
index 5199616..0f78a41 100644
--- a/pyodbc.egg-info/PKG-INFO
+++ b/pyodbc.egg-info/PKG-INFO
@@ -1,21 +1,21 @@
-Metadata-Version: 1.0
-Name: pyodbc
-Version: 3.0.6
-Summary: DB API Module for ODBC
-Home-page: http://code.google.com/p/pyodbc
-Author: Michael Kleehammer
-Author-email: michael at kleehammer.com
-License: MIT
-Download-URL: http://code.google.com/p/pyodbc/downloads/list
-Description: A Python DB API 2 module for ODBC. This project provides an up-to-date, convenient interface to ODBC using native data types like datetime and decimal.
-Platform: UNKNOWN
-Classifier: Development Status :: 5 - Production/Stable
-Classifier: Intended Audience :: Developers
-Classifier: Intended Audience :: System Administrators
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Operating System :: Microsoft :: Windows
-Classifier: Operating System :: POSIX
-Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 3
-Classifier: Topic :: Database
+Metadata-Version: 1.1
+Name: pyodbc
+Version: 3.0.10
+Summary: DB API Module for ODBC
+Home-page: http://code.google.com/p/pyodbc
+Author: Michael Kleehammer
+Author-email: michael at kleehammer.com
+License: MIT
+Download-URL: http://code.google.com/p/pyodbc/downloads/list
+Description: A Python DB API 2 module for ODBC. This project provides an up-to-date, convenient interface to ODBC using native data types like datetime and decimal.
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Operating System :: Microsoft :: Windows
+Classifier: Operating System :: POSIX
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 3
+Classifier: Topic :: Database
diff --git a/pyodbc.egg-info/SOURCES.txt b/pyodbc.egg-info/SOURCES.txt
index d128b35..5858f17 100644
--- a/pyodbc.egg-info/SOURCES.txt
+++ b/pyodbc.egg-info/SOURCES.txt
@@ -1,19 +1,19 @@
LICENSE.txt
MANIFEST.in
-README.rst
+setup.cfg
setup.py
-C:/dev/pyodbc/src/buffer.cpp
-C:/dev/pyodbc/src/cnxninfo.cpp
-C:/dev/pyodbc/src/connection.cpp
-C:/dev/pyodbc/src/cursor.cpp
-C:/dev/pyodbc/src/errors.cpp
-C:/dev/pyodbc/src/getdata.cpp
-C:/dev/pyodbc/src/params.cpp
-C:/dev/pyodbc/src/pyodbccompat.cpp
-C:/dev/pyodbc/src/pyodbcdbg.cpp
-C:/dev/pyodbc/src/pyodbcmodule.cpp
-C:/dev/pyodbc/src/row.cpp
-C:/dev/pyodbc/src/sqlwchar.cpp
+/Users/mkleehammer/dev/pyodbc/src/buffer.cpp
+/Users/mkleehammer/dev/pyodbc/src/cnxninfo.cpp
+/Users/mkleehammer/dev/pyodbc/src/connection.cpp
+/Users/mkleehammer/dev/pyodbc/src/cursor.cpp
+/Users/mkleehammer/dev/pyodbc/src/errors.cpp
+/Users/mkleehammer/dev/pyodbc/src/getdata.cpp
+/Users/mkleehammer/dev/pyodbc/src/params.cpp
+/Users/mkleehammer/dev/pyodbc/src/pyodbccompat.cpp
+/Users/mkleehammer/dev/pyodbc/src/pyodbcdbg.cpp
+/Users/mkleehammer/dev/pyodbc/src/pyodbcmodule.cpp
+/Users/mkleehammer/dev/pyodbc/src/row.cpp
+/Users/mkleehammer/dev/pyodbc/src/sqlwchar.cpp
pyodbc.egg-info/PKG-INFO
pyodbc.egg-info/SOURCES.txt
pyodbc.egg-info/dependency_links.txt
diff --git a/setup.cfg b/setup.cfg
index b14b0bc..374087c 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,5 +1,8 @@
-[egg_info]
-tag_build =
-tag_date = 0
-tag_svn_revision = 0
-
+[build_ext]
+include_dirs = /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/include
+
+[egg_info]
+tag_build =
+tag_date = 0
+tag_svn_revision = 0
+
diff --git a/setup.py b/setup.py
old mode 100644
new mode 100755
index 79023e5..5940f70
--- a/setup.py
+++ b/setup.py
@@ -12,6 +12,11 @@ except ImportError:
from distutils.extension import Extension
from distutils.errors import *
+if sys.hexversion >= 0x03000000:
+ from configparser import ConfigParser
+else:
+ from ConfigParser import ConfigParser
+
OFFICIAL_BUILD = 9999
def _print(s):
@@ -110,8 +115,12 @@ def main():
def get_compiler_settings(version_str):
- settings = { 'libraries': [],
- 'define_macros' : [ ('PYODBC_VERSION', version_str) ] }
+ settings = {
+ 'extra_compile_args' : [],
+ 'libraries': [],
+ 'include_dirs': [],
+ 'define_macros' : [ ('PYODBC_VERSION', version_str) ]
+ }
# This isn't the best or right way to do this, but I don't see how someone is supposed to sanely subclass the build
# command.
@@ -122,36 +131,58 @@ def get_compiler_settings(version_str):
except ValueError:
pass
+ from array import array
+ UNICODE_WIDTH = array('u').itemsize
+ settings['define_macros'].append(('PYODBC_UNICODE_WIDTH', str(UNICODE_WIDTH)))
+
if os.name == 'nt':
- settings['extra_compile_args'] = ['/Wall',
- '/wd4668',
- '/wd4820',
- '/wd4711', # function selected for automatic inline expansion
- '/wd4100', # unreferenced formal parameter
- '/wd4127', # "conditional expression is constant" testing compilation constants
- '/wd4191', # casts to PYCFunction which doesn't have the keywords parameter
- ]
- settings['libraries'].append('odbc32')
- settings['libraries'].append('advapi32')
+ settings['extra_compile_args'].extend([
+ '/Wall',
+ '/wd4668',
+ '/wd4820',
+ '/wd4711', # function selected for automatic inline expansion
+ '/wd4100', # unreferenced formal parameter
+ '/wd4127', # "conditional expression is constant" testing compilation constants
+ '/wd4191', # casts to PYCFunction which doesn't have the keywords parameter
+ ])
if '--debug' in sys.argv:
sys.argv.remove('--debug')
settings['extra_compile_args'].extend('/Od /Ge /GS /GZ /RTC1 /Wp64 /Yd'.split())
+ settings['libraries'].append('odbc32')
+ settings['libraries'].append('advapi32')
+
elif os.environ.get("OS", '').lower().startswith('windows'):
# Windows Cygwin (posix on windows)
# OS name not windows, but still on Windows
settings['libraries'].append('odbc32')
elif sys.platform == 'darwin':
- # OS/X now ships with iODBC.
- settings['libraries'].append('iodbc')
+ # The latest versions of OS X no longer ship with iodbc. Assume
+ # unixODBC for now.
+ settings['libraries'].append('odbc')
+
+ # Python functions take a lot of 'char *' that really should be const. gcc complains about this *a lot*
+ settings['extra_compile_args'].extend([
+ '-Wno-write-strings',
+ '-Wno-deprecated-declarations'
+ ])
+
+ # Apple has decided they won't maintain the iODBC system in OS/X and has added deprecation warnings in 10.8.
+ # For now target 10.7 to eliminate the warnings.
+ settings['define_macros'].append( ('MAC_OS_X_VERSION_10_7',) )
else:
# Other posix-like: Linux, Solaris, etc.
# Python functions take a lot of 'char *' that really should be const. gcc complains about this *a lot*
- settings['extra_compile_args'] = ['-Wno-write-strings']
+ settings['extra_compile_args'].append('-Wno-write-strings')
+
+ if UNICODE_WIDTH == 4:
+ # This makes UnixODBC use UCS-4 instead of UCS-2, which works better with sizeof(wchar_t)==4.
+ # Thanks to Marc-Antoine Parent
+ settings['define_macros'].append(('SQL_WCHART_CONVERT', '1'))
# What is the proper way to detect iODBC, MyODBC, unixODBC, etc.?
settings['libraries'].append('odbc')
@@ -231,7 +262,7 @@ def get_version():
def _get_version_pkginfo():
filename = join(dirname(abspath(__file__)), 'PKG-INFO')
if exists(filename):
- re_ver = re.compile(r'^Version: \s+ (\d+)\.(\d+)\.(\d+) (?: -beta(\d+))?', re.VERBOSE)
+ re_ver = re.compile(r'^Version: \s+ (\d+)\.(\d+)\.(\d+) (?: b(\d+))?', re.VERBOSE)
for line in open(filename):
match = re_ver.search(line)
if match:
@@ -259,12 +290,12 @@ def _get_version_git():
if numbers[-1] != OFFICIAL_BUILD:
# This is a beta of the next micro release, so increment the micro number to reflect this.
numbers[-2] += 1
- name = '%s.%s.%s-beta%02d' % tuple(numbers)
+ name = '%s.%s.%sb%d' % tuple(numbers)
- n, result = getoutput('git branch')
- branch = re.search(r'\* (\w+)', result).group(1)
+ n, result = getoutput('git branch --color=never')
+ branch = re.search(r'\* (\S+)', result).group(1)
if branch != 'master' and not re.match('^v\d+$', branch):
- name = branch + '-' + name
+ name = name + '+' + branch.replace('-', '')
return name, numbers
diff --git a/src/cnxninfo.cpp b/src/cnxninfo.cpp
index 659dbbf..b336e82 100644
--- a/src/cnxninfo.cpp
+++ b/src/cnxninfo.cpp
@@ -51,17 +51,17 @@ static PyObject* GetHash(PyObject* p)
Object hash(PyObject_CallMethod(hashlib, "new", "s", "sha1"));
if (!hash.IsValid())
return 0;
-
+
PyObject_CallMethodObjArgs(hash, update, p, 0);
return PyObject_CallMethod(hash, "hexdigest", 0);
}
-
+
if (sha)
{
Object hash(PyObject_CallMethod(sha, "new", 0));
if (!hash.IsValid())
return 0;
-
+
PyObject_CallMethodObjArgs(hash, update, p, 0);
return PyObject_CallMethod(hash, "hexdigest", 0);
}
@@ -85,6 +85,7 @@ static PyObject* CnxnInfo_New(Connection* cnxn)
p->odbc_minor = 50;
p->supports_describeparam = false;
p->datetime_precision = 19; // default: "yyyy-mm-dd hh:mm:ss"
+ p->need_long_data_len = false;
// WARNING: The GIL lock is released for the *entire* function here. Do not touch any objects, call Python APIs,
// etc. We are simply making ODBC calls and setting atomic values (ints & chars). Also, make sure the lock gets
@@ -108,11 +109,11 @@ static PyObject* CnxnInfo_New(Connection* cnxn)
}
char szYN[2];
- ret = SQLGetInfo(cnxn->hdbc, SQL_DESCRIBE_PARAMETER, szYN, _countof(szYN), &cch);
- if (SQL_SUCCEEDED(ret))
- {
+ if (SQL_SUCCEEDED(SQLGetInfo(cnxn->hdbc, SQL_DESCRIBE_PARAMETER, szYN, _countof(szYN), &cch)))
p->supports_describeparam = szYN[0] == 'Y';
- }
+
+ if (SQL_SUCCEEDED(SQLGetInfo(cnxn->hdbc, SQL_NEED_LONG_DATA_LEN, szYN, _countof(szYN), &cch)))
+ p->need_long_data_len = (szYN[0] == 'Y');
// These defaults are tiny, but are necessary for Access.
p->varchar_maxlength = 255;
@@ -124,42 +125,28 @@ static PyObject* CnxnInfo_New(Connection* cnxn)
{
SQLINTEGER columnsize;
if (SQL_SUCCEEDED(SQLGetTypeInfo(hstmt, SQL_TYPE_TIMESTAMP)) && SQL_SUCCEEDED(SQLFetch(hstmt)))
- {
if (SQL_SUCCEEDED(SQLGetData(hstmt, 3, SQL_INTEGER, &columnsize, sizeof(columnsize), 0)))
p->datetime_precision = (int)columnsize;
- SQLFreeStmt(hstmt, SQL_CLOSE);
- }
-
if (SQL_SUCCEEDED(SQLGetTypeInfo(hstmt, SQL_VARCHAR)) && SQL_SUCCEEDED(SQLFetch(hstmt)))
- {
if (SQL_SUCCEEDED(SQLGetData(hstmt, 3, SQL_INTEGER, &columnsize, sizeof(columnsize), 0)))
p->varchar_maxlength = (int)columnsize;
-
- SQLFreeStmt(hstmt, SQL_CLOSE);
- }
-
+
if (SQL_SUCCEEDED(SQLGetTypeInfo(hstmt, SQL_WVARCHAR)) && SQL_SUCCEEDED(SQLFetch(hstmt)))
- {
if (SQL_SUCCEEDED(SQLGetData(hstmt, 3, SQL_INTEGER, &columnsize, sizeof(columnsize), 0)))
p->wvarchar_maxlength = (int)columnsize;
-
- SQLFreeStmt(hstmt, SQL_CLOSE);
- }
-
+
if (SQL_SUCCEEDED(SQLGetTypeInfo(hstmt, SQL_BINARY)) && SQL_SUCCEEDED(SQLFetch(hstmt)))
- {
if (SQL_SUCCEEDED(SQLGetData(hstmt, 3, SQL_INTEGER, &columnsize, sizeof(columnsize), 0)))
p->binary_maxlength = (int)columnsize;
-
- SQLFreeStmt(hstmt, SQL_CLOSE);
- }
+
+ SQLFreeStmt(hstmt, SQL_CLOSE);
}
Py_END_ALLOW_THREADS
// WARNING: Released the lock now.
-
+
return info.Detach();
}
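The `GetHash` helper in the cnxninfo.cpp hunk above drives Python's hashlib from C: it calls `hashlib.new("sha1")`, then `update`, then `hexdigest` to fingerprint a connection string for the connection-info cache. The same call sequence in pure Python, with a hypothetical connection string:

```python
import hashlib

# Hypothetical connection string; GetHash fingerprints whatever string
# the caller passes in.
cstring = b'DSN=mydsn;UID=user'

# Same sequence the C code performs via the Python C API:
# hashlib.new("sha1") -> hash.update(p) -> hash.hexdigest()
h = hashlib.new('sha1')
h.update(cstring)
digest = h.hexdigest()
print(digest)
```

The fallback branch in the C code does the equivalent with the legacy `sha` module's `sha.new()` for very old Pythons that predate hashlib.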
diff --git a/src/cnxninfo.h b/src/cnxninfo.h
index d604513..c5e78c0 100644
--- a/src/cnxninfo.h
+++ b/src/cnxninfo.h
@@ -27,6 +27,10 @@ struct CnxnInfo
bool supports_describeparam;
int datetime_precision;
+ // Do we need to use SQL_LEN_DATA_AT_EXEC? Some drivers (e.g. FreeTDS 0.91) have problems with long values, so
+ // we'll use SQL_DATA_AT_EXEC when possible. If this is true, however, we'll need to pass the length.
+ bool need_long_data_len;
+
// These are from SQLGetTypeInfo.column_size, so the char ones are in characters, not bytes.
int varchar_maxlength;
int wvarchar_maxlength;
diff --git a/src/connection.cpp b/src/connection.cpp
index 8dc091d..b1e18f9 100644
--- a/src/connection.cpp
+++ b/src/connection.cpp
@@ -82,7 +82,7 @@ static bool Connect(PyObject* pConnectString, HDBC hdbc, bool fAnsi, long timeou
{
SQLWChar connectString(pConnectString);
Py_BEGIN_ALLOW_THREADS
- ret = SQLDriverConnectW(hdbc, 0, connectString, (SQLSMALLINT)connectString.size(), 0, 0, 0, SQL_DRIVER_NOPROMPT);
+ ret = SQLDriverConnectW(hdbc, 0, connectString.get(), (SQLSMALLINT)connectString.size(), 0, 0, 0, SQL_DRIVER_NOPROMPT);
Py_END_ALLOW_THREADS
if (SQL_SUCCEEDED(ret))
return true;
@@ -198,7 +198,9 @@ PyObject* Connection_New(PyObject* pConnectString, bool fAutoCommit, bool fAnsi,
cnxn->nAutoCommit = fAutoCommit ? SQL_AUTOCOMMIT_ON : SQL_AUTOCOMMIT_OFF;
cnxn->searchescape = 0;
cnxn->timeout = 0;
+#if PY_MAJOR_VERSION < 3
cnxn->unicode_results = fUnicodeResults;
+#endif
cnxn->conv_count = 0;
cnxn->conv_types = 0;
cnxn->conv_funcs = 0;
@@ -263,6 +265,7 @@ PyObject* Connection_New(PyObject* pConnectString, bool fAutoCommit, bool fAnsi,
cnxn->varchar_maxlength = p->varchar_maxlength;
cnxn->wvarchar_maxlength = p->wvarchar_maxlength;
cnxn->binary_maxlength = p->binary_maxlength;
+ cnxn->need_long_data_len = p->need_long_data_len;
return reinterpret_cast<PyObject*>(cnxn);
}
@@ -782,7 +785,7 @@ static int Connection_settimeout(PyObject* self, PyObject* value, void* closure)
PyErr_SetString(PyExc_TypeError, "Cannot delete the timeout attribute.");
return -1;
}
- intptr_t timeout = PyInt_AsLong(value);
+ long timeout = PyInt_AsLong(value);
if (timeout == -1 && PyErr_Occurred())
return -1;
if (timeout < 0)
@@ -902,7 +905,7 @@ static PyObject* Connection_enter(PyObject* self, PyObject* args)
return self;
}
-static char exit_doc[] = "__exit__(*excinfo) -> None. Closes the connection.";
+static char exit_doc[] = "__exit__(*excinfo) -> None. Commits the connection if necessary.";
static PyObject* Connection_exit(PyObject* self, PyObject* args)
{
Connection* cnxn = (Connection*)self;
@@ -911,8 +914,16 @@ static PyObject* Connection_exit(PyObject* self, PyObject* args)
I(PyTuple_Check(args));
if (cnxn->nAutoCommit == SQL_AUTOCOMMIT_OFF && PyTuple_GetItem(args, 0) == Py_None)
- SQLEndTran(SQL_HANDLE_DBC, cnxn->hdbc, SQL_COMMIT);
+ {
+ SQLRETURN ret;
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLEndTran(SQL_HANDLE_DBC, cnxn->hdbc, SQL_COMMIT);
+ Py_END_ALLOW_THREADS
+ if (!SQL_SUCCEEDED(ret))
+ return RaiseErrorFromHandle("SQLEndTran(SQL_COMMIT)", cnxn->hdbc, SQL_NULL_HANDLE);
+ }
+
Py_RETURN_NONE;
}
diff --git a/src/connection.h b/src/connection.h
index 69f287e..552db5f 100644
--- a/src/connection.h
+++ b/src/connection.h
@@ -3,7 +3,7 @@
// documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so.
-//
+//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
// WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
// OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
@@ -21,7 +21,7 @@ struct Connection
PyObject_HEAD
// Set to SQL_NULL_HANDLE when the connection is closed.
- HDBC hdbc;
+ HDBC hdbc;
// Will be SQL_AUTOCOMMIT_ON or SQL_AUTOCOMMIT_OFF.
uintptr_t nAutoCommit;
@@ -40,17 +40,26 @@ struct Connection
// The column size of datetime columns, obtained from SQLGetInfo(), used to determine the datetime precision.
int datetime_precision;
- // If true, then the strings in the rows are returned as unicode objects.
+#if PY_MAJOR_VERSION < 3
bool unicode_results;
+ // If true, ANSI columns are returned as Unicode.
+#endif
// The connection timeout in seconds.
- intptr_t timeout;
+ long timeout;
+
+ PyObject* unicode_encoding;
+ // The optional Unicode encoding of the database. Unicode strings are
+ // encoded when sent and decoded when received.
+ //
+ // If not provided, UCS-2 is used.
// These are copied from cnxn info for performance and convenience.
int varchar_maxlength;
int wvarchar_maxlength;
int binary_maxlength;
+ bool need_long_data_len;
// Output conversions. Maps from SQL type in conv_types to the converter function in conv_funcs.
//
diff --git a/src/cursor.cpp b/src/cursor.cpp
index 1f32e0f..11db87a 100644
--- a/src/cursor.cpp
+++ b/src/cursor.cpp
@@ -25,6 +25,7 @@
#include "dbspecific.h"
#include "sqlwchar.h"
#include <datetime.h>
+#include "wrapper.h"
enum
{
@@ -125,7 +126,7 @@ inline bool IsNumericType(SQLSMALLINT sqltype)
}
-static PyObject* PythonTypeFromSqlType(Cursor* cur, const SQLCHAR* name, SQLSMALLINT type, bool unicode_results)
+static PyObject* PythonTypeFromSqlType(Cursor* cur, const SQLCHAR* name, SQLSMALLINT type)
{
// Returns a type object ('int', 'str', etc.) for the given ODBC C type. This is used to populate
// Cursor.description with the type of Python object that will be returned for each column.
@@ -151,10 +152,14 @@ static PyObject* PythonTypeFromSqlType(Cursor* cur, const SQLCHAR* name, SQLSMAL
case SQL_LONGVARCHAR:
case SQL_GUID:
case SQL_SS_XML:
- if (unicode_results)
+#if PY_MAJOR_VERSION < 3
+ if (cur->cnxn->unicode_results)
pytype = (PyObject*)&PyUnicode_Type;
else
pytype = (PyObject*)&PyString_Type;
+#else
+ pytype = (PyObject*)&PyString_Type;
+#endif
break;
case SQL_DECIMAL:
@@ -277,7 +282,7 @@ static bool create_name_map(Cursor* cur, SQLSMALLINT field_count, bool lower)
if (lower)
_strlwr((char*)name);
- type = PythonTypeFromSqlType(cur, name, nDataType, cur->cnxn->unicode_results);
+ type = PythonTypeFromSqlType(cur, name, nDataType);
if (!type)
goto done;
@@ -686,7 +691,7 @@ static PyObject* execute(Cursor* cur, PyObject* pSql, PyObject* params, bool ski
if (!query)
return 0;
Py_BEGIN_ALLOW_THREADS
- ret = SQLExecDirectW(cur->hstmt, query, SQL_NTS);
+ ret = SQLExecDirectW(cur->hstmt, query.get(), SQL_NTS);
Py_END_ALLOW_THREADS
}
}
@@ -742,7 +747,7 @@ static PyObject* execute(Cursor* cur, PyObject* pSql, PyObject* params, bool ski
{
SQLLEN remaining = min(cur->cnxn->varchar_maxlength, length - offset);
Py_BEGIN_ALLOW_THREADS
- ret = SQLPutData(cur->hstmt, (SQLPOINTER)wchar[offset], (SQLLEN)(remaining * sizeof(SQLWCHAR)));
+ ret = SQLPutData(cur->hstmt, (SQLPOINTER)wchar[offset], (SQLLEN)(remaining * sizeof(ODBCCHAR)));
Py_END_ALLOW_THREADS
if (!SQL_SUCCEEDED(ret))
return RaiseErrorFromHandle("SQLPutData", cur->cnxn->hdbc, cur->hstmt);
@@ -959,32 +964,66 @@ static PyObject* Cursor_executemany(PyObject* self, PyObject* args)
return 0;
}
- if (!IsSequence(param_seq))
+ if (IsSequence(param_seq))
{
- PyErr_SetString(ProgrammingError, "The second parameter to executemany must be a sequence.");
- return 0;
- }
+ Py_ssize_t c = PySequence_Size(param_seq);
- Py_ssize_t c = PySequence_Size(param_seq);
+ if (c == 0)
+ {
+ PyErr_SetString(ProgrammingError, "The second parameter to executemany must not be empty.");
+ return 0;
+ }
- if (c == 0)
- {
- PyErr_SetString(ProgrammingError, "The second parameter to executemany must not be empty.");
- return 0;
+ for (Py_ssize_t i = 0; i < c; i++)
+ {
+ PyObject* params = PySequence_GetItem(param_seq, i);
+ PyObject* result = execute(cursor, pSql, params, false);
+ bool success = result != 0;
+ Py_XDECREF(result);
+ Py_DECREF(params);
+ if (!success)
+ {
+ cursor->rowcount = -1;
+ return 0;
+ }
+ }
}
-
- for (Py_ssize_t i = 0; i < c; i++)
+ else if (PyGen_Check(param_seq) || PyIter_Check(param_seq))
{
- PyObject* params = PySequence_GetItem(param_seq, i);
- PyObject* result = execute(cursor, pSql, params, false);
- bool success = result != 0;
- Py_XDECREF(result);
- Py_DECREF(params);
- if (!success)
+ Object iter;
+
+ if (PyGen_Check(param_seq))
{
- cursor->rowcount = -1;
- return 0;
+ iter = PyObject_GetIter(param_seq);
+ }
+ else
+ {
+ iter = param_seq;
+ Py_INCREF(param_seq);
}
+
+ Object params;
+
+ while (params.Attach(PyIter_Next(iter)))
+ {
+ PyObject* result = execute(cursor, pSql, params, false);
+ bool success = result != 0;
+ Py_XDECREF(result);
+
+ if (!success)
+ {
+ cursor->rowcount = -1;
+ return 0;
+ }
+ }
+
+ if (PyErr_Occurred())
+ return 0;
+ }
+ else
+ {
+ PyErr_SetString(ProgrammingError, "The second parameter to executemany must be a sequence, iterator, or generator.");
+ return 0;
}
cursor->rowcount = -1;
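The reworked `Cursor_executemany` above now accepts sequences, iterators, and generators, rejecting anything else and rejecting empty sequences. Its branching can be sketched in Python; the function name and the returned list are hypothetical illustration, not pyodbc's API:

```python
from collections.abc import Iterator, Sequence

def executemany_dispatch(param_seq):
    """Sketch of the new executemany branching: sequences are checked for
    emptiness and walked item by item, iterators/generators are consumed
    with the iterator protocol, and anything else raises."""
    executed = []
    if isinstance(param_seq, Sequence):
        if len(param_seq) == 0:
            raise ValueError(
                "The second parameter to executemany must not be empty.")
        for params in param_seq:
            executed.append(params)  # the C code calls execute() here
    elif isinstance(param_seq, Iterator):
        for params in param_seq:
            executed.append(params)  # emptiness cannot be pre-checked
        # PyIter_Next returning NULL may mean exhaustion or error;
        # the C code checks PyErr_Occurred() after the loop.
    else:
        raise TypeError("The second parameter to executemany must be a "
                        "sequence, iterator, or generator.")
    return executed
```

Note the asymmetry the diff preserves: only the sequence branch can raise the "must not be empty" error, because a generator's length is unknown until it is consumed.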
@@ -1040,7 +1079,7 @@ static PyObject* Cursor_fetch(Cursor* cur)
apValues[i] = value;
}
- return (PyObject*)Row_New(cur->description, cur->map_name_to_index, field_count, apValues);
+ return (PyObject*)Row_InternalNew(cur->description, cur->map_name_to_index, field_count, apValues);
}
@@ -2054,6 +2093,40 @@ static char fetchmany_doc[] =
"A ProgrammingError exception is raised if the previous call to execute() did\n" \
"not produce any result set or no call was issued yet.";
+
+static char enter_doc[] = "__enter__() -> self.";
+static PyObject* Cursor_enter(PyObject* self, PyObject* args)
+{
+ UNUSED(args);
+ Py_INCREF(self);
+ return self;
+}
+
+static char exit_doc[] = "__exit__(*excinfo) -> None. Commits the connection if necessary.";
+static PyObject* Cursor_exit(PyObject* self, PyObject* args)
+{
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_OPEN | CURSOR_RAISE_ERROR);
+ if (!cursor)
+ return 0;
+
+ // If an error has occurred, `args` will be a tuple of 3 values. Otherwise it will be a tuple of 3 `None`s.
+ I(PyTuple_Check(args));
+
+ if (cursor->cnxn->nAutoCommit == SQL_AUTOCOMMIT_OFF && PyTuple_GetItem(args, 0) == Py_None)
+ {
+ SQLRETURN ret;
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLEndTran(SQL_HANDLE_DBC, cursor->cnxn->hdbc, SQL_COMMIT);
+ Py_END_ALLOW_THREADS
+
+ if (!SQL_SUCCEEDED(ret))
+ return RaiseErrorFromHandle("SQLEndTran(SQL_COMMIT)", cursor->cnxn->hdbc, cursor->hstmt);
+ }
+
+ Py_RETURN_NONE;
+}
+
+
static PyMethodDef Cursor_methods[] =
{
{ "close", (PyCFunction)Cursor_close, METH_NOARGS, close_doc },
@@ -2078,6 +2151,8 @@ static PyMethodDef Cursor_methods[] =
{ "skip", (PyCFunction)Cursor_skip, METH_VARARGS, skip_doc },
{ "commit", (PyCFunction)Cursor_commit, METH_NOARGS, commit_doc },
{ "rollback", (PyCFunction)Cursor_rollback, METH_NOARGS, rollback_doc },
+ { "__enter__", Cursor_enter, METH_NOARGS, enter_doc },
+ { "__exit__", Cursor_exit, METH_VARARGS, exit_doc },
{ 0, 0, 0, 0 }
};
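The `Cursor_enter`/`Cursor_exit` pair registered above makes cursors usable as context managers: `__enter__` returns the cursor itself, and `__exit__` issues `SQLEndTran(SQL_COMMIT)` only when autocommit is off and no exception is propagating. A pure-Python sketch of those semantics; `DummyConnection` and `DummyCursor` are hypothetical stand-ins, not pyodbc types:

```python
class DummyConnection:
    """Stand-in tracking commits instead of talking to ODBC."""
    def __init__(self, autocommit=False):
        self.autocommit = autocommit
        self.commits = 0

    def commit(self):
        self.commits += 1

class DummyCursor:
    def __init__(self, cnxn):
        self.cnxn = cnxn

    def __enter__(self):
        # Mirrors Cursor_enter: Py_INCREF(self); return self.
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Mirrors Cursor_exit: commit iff autocommit is off and the
        # first element of excinfo is None (no exception in flight).
        if not self.cnxn.autocommit and exc_type is None:
            self.cnxn.commit()
        return False  # never suppress exceptions
```

With this sketch, `with DummyCursor(cnxn): ...` commits once on a clean exit and skips the commit when an exception propagates or autocommit is on, matching the `SQLEndTran` branch in the C code.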
diff --git a/src/getdata.cpp b/src/getdata.cpp
index 8dc825e..53e2115 100644
--- a/src/getdata.cpp
+++ b/src/getdata.cpp
@@ -27,13 +27,13 @@ class DataBuffer
//
// 1) Binary, which is a simple array of 8-bit bytes.
// 2) ANSI text, which is an array of chars with a NULL terminator.
- // 3) Unicode text, which is an array of SQLWCHARs with a NULL terminator.
+ // 3) Unicode text, which is an array of ODBCCHARs with a NULL terminator.
//
- // When dealing with Unicode, there are two widths we have to be aware of: (1) SQLWCHAR and (2) Py_UNICODE. If
+ // When dealing with Unicode, there are two widths we have to be aware of: (1) ODBCCHAR and (2) Py_UNICODE. If
// these are the same we can use a PyUnicode object so we don't have to allocate our own buffer and then the
// Unicode object. If they are not the same (e.g. OS/X where wchar_t-->4 Py_UNICODE-->2) then we need to maintain
// our own buffer and pass it to the PyUnicode object later. Many Linux distros are now using UCS4, so Py_UNICODE
- // will be larger than SQLWCHAR.
+ // will be larger than ODBCCHAR.
//
// To reduce heap fragmentation, we perform the initial read into an array on the stack since we don't know the
// length of the data. If the data doesn't fit, this class then allocates new memory. If the first read gives us
@@ -61,7 +61,7 @@ public:
this->dataType = dataType;
- element_size = (int)((dataType == SQL_C_WCHAR) ? sizeof(SQLWCHAR) : sizeof(char));
+ element_size = (int)((dataType == SQL_C_WCHAR) ? ODBCCHAR_SIZE : sizeof(char));
null_size = (dataType == SQL_C_BINARY) ? 0 : element_size;
buffer = stackBuffer;
@@ -138,7 +138,7 @@ public:
buffer = bufferOwner ? PyBytes_AS_STRING(bufferOwner) : 0;
#endif
}
- else if (sizeof(SQLWCHAR) == Py_UNICODE_SIZE)
+ else if (ODBCCHAR_SIZE == Py_UNICODE_SIZE)
{
// Allocate directly into a Unicode object.
bufferOwner = PyUnicode_FromUnicode(0, newSize / element_size);
@@ -146,7 +146,7 @@ public:
}
else
{
- // We're Unicode, but SQLWCHAR and Py_UNICODE don't match, so maintain our own SQLWCHAR buffer.
+ // We're Unicode, but ODBCCHAR and Py_UNICODE don't match, so maintain our own ODBCCHAR buffer.
bufferOwner = 0;
buffer = (char*)pyodbc_malloc((size_t)newSize);
}
@@ -215,9 +215,6 @@ public:
#endif
}
- if (sizeof(SQLWCHAR) == Py_UNICODE_SIZE)
- return PyUnicode_FromUnicode((const Py_UNICODE*)buffer, bytesUsed / element_size);
-
return PyUnicode_FromSQLWCHAR((const SQLWCHAR*)buffer, bytesUsed / element_size);
}
@@ -257,7 +254,7 @@ public:
I(bufferOwner == 0);
PyObject* result = PyUnicode_FromSQLWCHAR((const SQLWCHAR*)buffer, bytesUsed / element_size);
if (result == 0)
- return false;
+ return 0;
pyodbc_free(buffer);
buffer = 0;
return result;
@@ -327,8 +324,8 @@ static PyObject* GetDataString(Cursor* cur, Py_ssize_t iCol)
break;
}
- char tempBuffer[1024];
- DataBuffer buffer(nTargetType, tempBuffer, sizeof(tempBuffer));
+ char tempBuffer[1026]; // Pad with 2 bytes for driver bugs
+ DataBuffer buffer(nTargetType, tempBuffer, sizeof(tempBuffer)-2);
... 723 lines suppressed ...
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/python-modules/packages/pyodbc.git