Chris Golledge | 21 Aug 02:12 2014


The Microsoft definition for the value is "A 32-bit bitmask ...".  The unixODBC driver manager implements this value as an int (SQLINTEGER).  The size of the int type is platform dependent.  On little-endian systems it works out OK for my driver to write only the first 32 bits (at least if the variable is initialized), but on a 64-bit, big-endian system like AIX (and, I think, Solaris), the driver's 32-bit value ends up in the high-order bytes of what the application reads as a 64-bit integer, and that does not work.

There is no mention per se of this attribute in the ODBC 64-bit changes documentation, so you might say that since it was originally a 32-bit mask, and there is no mention of it, it should remain a 32-bit mask.  But you can get there by going around the barn: TXN_ISOLATION has been around since ODBC 1.0, at which time it was set with SQLSetConnectOption, and the 64-bit changes table declares that call's parameter as

    SQLULEN Value

and SQLSetConnectOption maps to SQLSetConnectAttr.

Is there a better explanation for why this attribute value becomes 64 bits on AIX?
(Being pedantic: if this logic is correct, I'm thinking it should be declared as an SQLULEN instead of an SQLINTEGER in the DM code.)

Chris Golledge
IBM Software Group, Lenexa KS
Tel: (913) 599 7250

"Ah, because I have learned something since last week."  - Gandhi
unixODBC-dev mailing list
unixODBC-dev <at>
Scott Zhong | 6 Jun 20:36 2014

setting SQL_ATTR_ODBC_CURSORS attribute to SQL_CUR_USE_ODBC fails to open the libodbccr library on AIX



    unixODBC 2.3.2 on AIX 6.1: after setting the SQL_ATTR_ODBC_CURSORS attribute to SQL_CUR_USE_ODBC and then connecting to a data source (DB2 is used in this testcase), an error is produced about being unable to open the cursor library.


Testcase snippet:

    SQLSetConnectAttr (hdbc, SQL_ATTR_ODBC_CURSORS, (SQLPOINTER) SQL_CUR_USE_ODBC, 0);
    SQLDriverConnect (hdbc, NULL, connStrIn, SQL_NTS, connStrOut, BUFFER_LEN, &connStrOutLen, SQL_DRIVER_NOPROMPT);


/>uname -srv

AIX 1 6

/>oslevel -s


/>xlC -qversion

IBM XL C/C++ for AIX, V12.1 (5765-J02, 5725-C72)

Version: 12.01.0000.0008

/>xlC -I$ODBC/include testcase_setconnectattr.cpp -L$ODBC/lib -lodbc -lpthread -liconv

/>./a.out QE197UTF dbtest1 zebco5


Using specified DSN : QE197UTF

setting SQL_ATTR_ODBC_CURSORS to "1"


rc: -1

ERROR: 0:  01000 : [unixODBC][Driver Manager]Can't open cursor lib '/nfs/packages/mdx/aix/ppc32/databases/unixodbc/2.3.2/etc/' : file not found

Error in Step 1 -- SQLDriverConnect failed



copying "libodbccr.a" to "/nfs/packages/mdx/aix/ppc32/databases/unixodbc/2.3.2/etc/" does NOT fix the issue.

Scott Zhong | 4 Jun 21:34 2014

Valgrind shows memory leak



    Valgrind 3.8.1 shows a memory leak in unixODBC 2.3.2 after setting the SQL_ATTR_ODBC_CURSORS attribute to SQL_CUR_USE_ODBC and then connecting to a data source.


Testcase snippet:

    SQLSetConnectAttr (hdbc, SQL_ATTR_ODBC_CURSORS, (SQLPOINTER) SQL_CUR_USE_ODBC, 0);
    SQLDriverConnect (hdbc, NULL, connStrIn, SQL_NTS, connStrOut, BUFFER_LEN, &connStrOutLen, SQL_DRIVER_NOPROMPT);


/>uname -srm

Linux 2.6.32-358.el6.x86_64 x86_64

/>cat /etc/redhat-release

Red Hat Enterprise Linux Server release 6.4 (Santiago)

/>g++ -g -I$ODBC/include testcase_unixodbc_leak.cpp -L$ODBC/lib -lodbc -lpthread

/>valgrind --leak-check=full --show-reachable=no --show-possibly-lost=no --track-origins=yes --num-callers=50 --gen-suppressions=all --xml=yes --xml-file=testcase.valgrind a.out


Valgrind leak entry:

    <text>5,056 (64 direct, 4,992 indirect) bytes in 1 blocks are definitely lost in loss record 167 of 170</text>
    <sframe> <fun>malloc</fun> </sframe>
    <sframe> <obj>*</obj> </sframe>
    <sframe> <fun>__connect_part_two</fun> </sframe>
    <sframe> <fun>SQLDriverConnect</fun> </sframe>
    <sframe> <fun>main</fun> </sframe>

Scott Z.

David Brown | 9 May 22:50 2014

Problems building from SVN

I am getting the following error when trying to build 2.3.3pre fetched from svn - suggestions?
make -f Makefile.svn
fails with:

    *** Retrieving configure tests needed by
    libtoolize: AC_CONFIG_MACRO_DIR([libltdl/m4]) conflicts with ACLOCAL_AMFLAGS=-I m4.
    make: *** [svn] Error 1
I see that there are two m4 directories (./m4 and libltdl/m4) containing files with the same names but different contents. I'm not sure whether both are used or one is obsolete; the files in libltdl/m4 are slightly larger, so if one is obsolete, I'm guessing it's ./m4.
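For what it's worth, libtoolize raises this complaint whenever the directory named in AC_CONFIG_MACRO_DIR differs from the one ACLOCAL_AMFLAGS passes with -I. A sketch of what consistent settings look like (which directory unixODBC should standardize on is for the maintainer to decide):

```
# configure.ac -- the macro directory autoconf/libtool will use:
AC_CONFIG_MACRO_DIR([m4])

# Makefile.am -- aclocal must be pointed at the same directory:
ACLOCAL_AMFLAGS = -I m4
```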
David Brown
FAU | 5 May 04:19 2014

SQLDataSourcesW() Buffer Lengths


It seems that SQLDataSourcesW() takes the buffer length arguments
as a number of bytes, when they should be a number of characters.  See for clarification.



David Brown | 2 May 03:31 2014

ANSI to Unicode mapping issues (resend)

(Pardon me if this is a duplicate - I tried sending it a few days ago from a 
different address, but it didn't appear to go through)

We have been building and shipping an older ANSI version of our ODBC driver
(StarSQL) in Unix/Linux environments. We recently ported our current Unicode
ODBC driver (which has been running on Windows for several years) to Linux,
and ran into some issues that appear to be related to the unixODBC Driver
Manager mappings from ANSI entry point to the driver's Unicode entry points
when an ANSI application invokes ODBC calls to a Unicode driver.

Has anyone else encountered any of these issues?  Thoughts on a solution?

We are using the 2.3.2 release.

Here is a list of the issues encountered by the developer of our driver:

1)      The Driver Manager does not map an ANSI application's calls to
SQLGet/SetStmtOption to a Unicode driver's SQLGet/SetStmtAttrW entry
points; it only does the mapping to SQLGet/SetStmtAttr for ANSI drivers. We
were able to work around this by adding SQLGet/SetStmtOption function entry
points in our driver, but we shouldn't have to do that.

2)      SQLSetDescField does not alter the length supplied by the
application ("buffer_length") when the field supplied is a string whose
value gets converted to Unicode before being passed to the Unicode driver.
In this particular Unicode ODBC API, the buffer_length should be a
byte count, not a character count. The implementation of SQLGetDescField in
the unixODBC driver manager deals with this better and divides
string_length by sizeof(SQLWCHAR) before returning to the application. That
works better, but is too simplistic for multi-byte ANSI data (e.g. UTF-8).

3)      Conversions between Unicode and ANSI almost universally assume
that one byte of ANSI data will produce two bytes of Unicode data (when
sizeof(SQLWCHAR) is 2). The code needs to check the length of the resulting
string (ANSI or Unicode) whenever such a conversion occurs, and then use the
resulting length when passing it on to the driver or calling application.
Functions like the ANSI versions of SQLPrepare and SQLExecDirect can't
just perform an ANSI-to-Unicode translation and then pass the
application-supplied length to the Unicode driver.

Looking at the unixODBC code, it seems clear that we were exposed to similar
problems with our old ANSI driver when called from a Unicode application.
Applications using parameter markers rather than string literals would be
less sensitive to the limitations of the current unixODBC driver manager
implementation, since keywords and identifiers are less likely to contain
"problematic" characters, but it would seem important to address this
nonetheless.

Any suggestions would be appreciated.

David Brown


Michael Jerris | 18 Apr 19:47 2014

patches fixing bugs in postgres 7.1 driver

Please see attached patches for memory leaks and a data truncation issue in the postgres driver.  Feedback
and inclusion appreciated.


Attachment (postgres7.1.fixes.diff): application/octet-stream, 5625 bytes

Heikki Linnakangas | 24 Mar 20:37 2014

Handling malloc failure


Many of the functions in __handles.c don't handle a NULL result from 
malloc/calloc properly. There are if-checks for it, but then they go 
ahead and dereference the NULL pointer anyway. See attached patch.

- Heikki
Tony Gelsy Sampaio | 19 Mar 16:08 2014

Keep in touch with me through LinkedIn.

From Tony Gelsy Sampaio
B.Sc. in Computer Science (UFMT), Specialist in Networks and Distributed Computing (IFMT)
Cuiabá Area, Brazil

I would like to add you to my professional network on LinkedIn.
-Tony Gelsy

You are receiving connection invitations by e-mail. Unsubscribe
© 2014, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, USA
Daniel Kasak | 12 Mar 07:42 2014

Teradata and 'SQL_ERROR or SQL_SUCCESS_WITH_INFO but no error reporting API found'

Hi all.

I have to connect to multiple databases via ODBC, so I *assume* I have to use unixODBC (and not, for example, Teradata's driver manager). The problem is that Teradata's ODBC drivers appear to be pretty bad. If I execute a statement that works, everything is fine. But if I execute something that raises an error, I get:

SQL_ERROR or SQL_SUCCESS_WITH_INFO but no error reporting API found

I've googled for a while and found other people with the same issue, but no solution. What's going on here? Am I correct in assuming that Teradata simply haven't implemented an error reporting API?

ZhengYabin | 1 Mar 02:29 2014

Issues when cross-compiling freetds using unixODBC

        I'm recently using unixODBC and freetds to build a development environment for accessing a remote SQL Server from an arm-linux platform.
Things I've already done:
1. built x86 unixODBC with "./configure --prefix=/usr/local/unixODBC-x86"
2. cross-compiled unixODBC with "./configure --prefix=/usr/local/unixODBC-arm --host=arm-linux"

3. built x86 freetds with "./configure --prefix=/usr/local/freetds-x86 --with-unixodbc=/usr/local/unixODBC-x86"; the process went fine.
4. but when I tried to build the arm edition with
"./configure --prefix=/usr/local/freetds-arm --with-unixodbc=/usr/local/unixODBC-arm --host=arm-linux"
it always generated an error:

./configure: line 16574: /usr/local/unixODBC-arm/bin/odbc_config: cannot execute binary file
./configure: line 16575: /usr/local/unixODBC-arm/bin/odbc_config: cannot execute binary file
configure: error: sql.h not found

It seems the configure script tried to execute the arm-linux edition of odbc_config, which will of course fail.
How can I fix this and complete the cross-compile build?
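Since the failure comes from configure running the target-arch odbc_config on the build host, one workaround to try is to bypass odbc_config and hand the paths to configure directly (an untested sketch; whether this particular freetds configure then finds sql.h without --with-unixodbc is an assumption):

```
# Point the compiler and linker at the ARM unixODBC install by hand,
# instead of letting configure execute the ARM odbc_config binary:
./configure --prefix=/usr/local/freetds-arm --host=arm-linux \
    CPPFLAGS="-I/usr/local/unixODBC-arm/include" \
    LDFLAGS="-L/usr/local/unixODBC-arm/lib"
```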

Sent from Windows Mail
