Issue 265: cursor.fetchall causes memory leak when * is used in a MySQL query string.
Status:  Complete
Owner: ----
Closed:  Sep 2012
Reported by cyborg10...@gmail.com, May 23, 2012
What steps will reproduce the problem?
1. cursor.execute("SELECT * FROM table WHERE 1")
2. rows = cursor.fetchall()
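
Put together, the repro is essentially the script below (a sketch only: the DSN name and credentials are placeholders, and "table" stands in for the real table name):

# Minimal reproduction sketch.  "MySQL" is a placeholder DSN and "table"
# stands in for the real table name.
import pyodbc

conn = pyodbc.connect("DSN=MySQL;UID=tool;PWD=secret")
cursor = conn.cursor()
cursor.execute("SELECT * FROM table WHERE 1")
rows = cursor.fetchall()   # hangs here; memory grows until the box starts swapping
print(len(rows))
conn.close()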

What is the expected output? What do you see instead?
  Would expect rows to be populated; instead Python hangs and memory use grows until the machine starts swapping.

What version of the product are you using? On what operating system?
  Pyodbc 2.1.11, MySQL ODBC 3.51 Driver, CentOS release 6.1 (Final).

Please provide any additional information below.
  Works OK on RHEL5 with the same setup, so it may be a missing or incompatible lib on CentOS.  Running the SELECT with all of the column names listed explicitly (rather than *) works OK.

May 23, 2012
Project Member #1 mkleehammer
Weird, because pyodbc simply passes the string on. It shouldn't be different unless the MySQL driver is doing something unusual.  Are you sure the drivers are the same on the two boxes?

Also, can you test this with pyodbc 3.0.x?  It works on both Python 2 and Python 3.  There were some large changes and the issue might already be fixed.
Status: Hold
May 24, 2012
#2 cyborg10...@gmail.com
I'm pretty sure they're the same, as I've reinstalled them.  With the latest, I get this:

Error: ('07006', '[07006] [MySQL][ODBC 3.51 Driver][mysqld-5.0.77]Restricted data type attribute violation (508) (SQLGetData)')

With the same command (a simple SELECT that should return 50-odd rows).
Jun 12, 2012
#3 dkleeham...@gmail.com
MySQL has a bug when using the 3.51 driver with a Decimal column type.  Is the RHEL5 table using decimal types?

MySQL Bug: http://bugs.mysql.com/bug.php?id=3028

They have fixed this bug in the 5.1 driver.  The original quick workaround was to change the column type from a Decimal to a Double.  That is NOT a good fix, but it may be a decent test if you are able to run it.
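
If altering the table isn't an option, a less intrusive way to run that test (just a sketch; "web.orders" and "amount" are made-up names) is to coerce the suspect decimal column inside the query so the driver never sees a DECIMAL result:

# Hypothetical test: "web.orders" and "amount" are placeholder names.
# Coercing the column in the query avoids changing the table definition.
import pyodbc

conn = pyodbc.connect("DSN=MySQL")
cursor = conn.cursor()

# Fetch the decimal column as a string ...
cursor.execute("SELECT id, CAST(amount AS CHAR) FROM web.orders")
print(cursor.fetchall())

# ... or force it to a DOUBLE by adding a float literal.
cursor.execute("SELECT id, amount + 0e0 FROM web.orders")
print(cursor.fetchall())

conn.close()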

Jun 18, 2012
#4 cyborg10...@gmail.com
Thanks.  They're both looking at the same table, but I think the RHEL5 install has an older ODBC build.  I will try 5.1 and let you know.
Jun 18, 2012
#5 cyborg10...@gmail.com
The only thing I can add for now is that the same DSN with the same command works when using isql, which is odd.
Jun 18, 2012
#6 cyborg10...@gmail.com
And can confirm that the issue is still there with 5.1.11.
Jun 19, 2012
#7 dkleeham...@gmail.com
Good morning.

After reading through the aforementioned bug report, I found that the bug still resided in MyODBC versions < 5.1.2.

If you look at the bug report for:
[17 Oct 2007 6:45] Georg Richter

you will see that he was still having the same issue as you.  Below his post is another:
[3 Jan 2008 5:02] Jess Balint
Fix committed into svn r983, will be released in Connector/ODBC 5.1.2.

Can you test this with 5.1.2?

Last question: is one of your boxes 32-bit and the other 64-bit?  I'm wondering if MyODBC is having an issue with decimals under one or the other.
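
For the architecture question, the Python side is easy to check on each box (a sketch; the driver's .so would still need to be checked separately):

# Report whether this Python interpreter is 32-bit or 64-bit.
import platform
import struct

print(platform.architecture())    # e.g. ('64bit', 'ELF') on Linux
print(struct.calcsize("P") * 8)   # pointer size in bits: 32 or 64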
Jun 19, 2012
#8 cyborg10...@gmail.com
Will retry with 5.1.2, but that still doesn't explain why it's working with isql but not pyodbc.  Both are 64-bit, but one is Intel, the other AMD (RHEL).
Jun 19, 2012
#9 dkleeham...@gmail.com
If you are still having issues, can you post an SQL trace and I'll take a look at it?

I have seen queries work on unixodbc that haven't worked on isql, but not the other way around.  Since there is a difference, it could be possible that this is one of those times.

As mentioned by Michael Kleehammer, pyodbc is an abstraction layer and only passes queries and data between the driver and the app.  The things that have mattered with pyodbc on Linux are usually the Python and pyodbc architectures, the ODBC and driver architectures, and Unicode vs. ASCII settings.
Jun 24, 2012
Project Member #10 mkleehammer
pyodbc does call a bunch of functions to examine the datatypes of returned data.  It is possible that isql doesn't care and simply asks for everything to be put into strings.

Can you generate an ODBC trace of each?  If we see how they differ, I might be able to work around the problem.
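
In the meantime, one quick check is to look at cursor.description right after the execute (it is filled in before any rows are fetched) to see which Python type the driver's metadata maps each column to.  A sketch, with the DSN and query as placeholders:

# Inspect the column types pyodbc derives from the driver's metadata.
import pyodbc

conn = pyodbc.connect("DSN=MySQL")
cursor = conn.cursor()
cursor.execute("SELECT * FROM table WHERE 1")
for column in cursor.description:
    name, type_code = column[0], column[1]
    print(name, type_code)   # a decimal.Decimal here points at the MyODBC decimal bug
conn.close()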
Jun 25, 2012
#11 cyborg10...@gmail.com
@mkleehammer  How do I perform an ODBC trace?  Never done one before!

Thanks
Jun 26, 2012
Project Member #12 mkleehammer
If you are using a DSN from your odbc.ini file, you would add the following two lines under your DSN entry:

[<yourdsn>]
Trace       = On
TraceFile   = /tmp/sql.log

(The value for Trace may be Yes...)

I should create a wiki page for this.
Jun 26, 2012
#13 cyborg10...@gmail.com
[MySQL]
Description = MySQL
Driver = MySQL ODBC 5.1 Driver
Server = webserver
Database = web
Port = 3306
User = tool
Password = ****
Trace = yes
TraceFile = /home//joe/sql.log

I've also tried "on", but I'm not getting a log (even if I run a command that works).

Jun 26, 2012
#15 dkleeham...@gmail.com
Good morning,

MySQL's documentation says the settings the driver will recognize are:
TraceFile  = /tmp/odbc.trace
Trace      = 1

Also, check your TraceFile setting: the one you posted has two forward slashes, which shouldn't matter, but some drivers are picky.  Your process will also need write permission to that directory.
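
One quick sanity check (the path below is just the directory from your TraceFile entry) is whether the process can write there at all:

# Hypothetical check: can this process write to the TraceFile directory?
import os
print(os.access("/home/joe", os.W_OK))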
Jun 26, 2012
#16 cyborg10...@gmail.com
Still not getting any output.  I've set Trace=1 and removed the double slash.

This DSN is set in my home ~/.odbc.ini; does this trace need to be set in /etc/odbc.ini instead?
Jun 26, 2012
#17 dkleeham...@gmail.com
You will need to set this in whichever odbc.ini file the DSN is actually being read from.  If the DSN you are using is located in /etc/odbc.ini, then yes.  You may need to comment out the DSN in your home directory or in /etc/odbc.ini to confirm which one you are using, and then update that one with the trace settings.

Jun 26, 2012
#18 cyborg10...@gmail.com
The DSN is specified in my home .odbc.ini (the one I've copied here), which is where I've put the Trace=1/yes/on and TraceFile settings.

I'm going to try this on another box to see if there's any difference.
Aug 28, 2012
#19 cyborg10...@gmail.com
I can confirm that pyodbc 3.0.6 has fixed this issue.
Sep 29, 2012
Project Member #20 mkleehammer
(No comment was entered for this change.)
Status: Complete