
[BUG] Memory leak on odbc 2.4.x when querying a table with columns in TEXT type of SQLite DB #304

Closed
victorshengchen opened this issue Jan 7, 2023 · 2 comments · Fixed by #306

victorshengchen commented Jan 7, 2023

Describe your system

  • odbc Package Version: 2.4.6
  • ODBC Driver: libsqlite3odbc.so (installed with cmd: apt-get install -y libsqliteodbc)
  • Database Name: SQLite
  • Database Version: 3
  • Database OS: Debian GNU/Linux 11
  • Node.js Version: 17.6
  • Node.js OS: Debian GNU/Linux 11 (running in a container based on the node:17.6 image)

Describe the bug
We have been using odbc 2.3.6 to access our SQLite DB with no issues for two years. After upgrading to odbc 2.4.6, we see a memory leak whenever we query a table that has TEXT columns.

Expected behavior
The process memory usage should remain stable.

To Reproduce
Steps to reproduce the behavior:

  1. in sqlite DB, CREATE TABLE testTable (myKey TEXT, name TEXT, class TEXT, description TEXT, grade TEXT);
  2. insert hundreds of records into the above table
  3. run the code snippet below with odbc 2.3.6; everything looks fine
  4. run the code snippet below with odbc 2.4.6; the memory usage of the process grows very high (>10 GB)
  5. using the new query option {initialBufferSize: 255} (2.4.x only) slows the memory growth, but the leak is still there
  6. if the table has no TEXT columns, odbc 2.4.6 works fine with the code snippet below

Code

var odbc = require("odbc");

async function testDB() {
	const connConfig = {
		connectionString: 'DSN=AppDB',
		connectionTimeout: 10,
		loginTimeout: 10,
		multipleStatements: true,
	};
	let tempconn;
	try {
		tempconn = await odbc.connect(connConfig);
		for (let i = 0; i < 1000; i++) {
			// the query option {initialBufferSize: 255} (2.4.x only) can slow down the leak, but does not stop it
			const results = await tempconn.query("SELECT * FROM testTable");
			console.log(`query done and get result ${i} ${results.length}`);
		}
	} catch (e) {
		console.log(`*******Exception*******: Connect to DB ${connConfig.connectionString} failed`, e);
	} finally {
		if (tempconn) {
			await tempconn.close(); // release the connection so it does not mask the per-query leak
		}
	}
}

testDB();
  • The offending code: see the snippet above
  • Any DEBUG information printed to the terminal: N/A
  • Any error information returned from a function call: N/A

Additional context
We have also tried 2.4.0 and hit the same issue as on 2.4.6, so the problem appears to affect all 2.4.x releases.

kadler (Member) commented Jan 9, 2023

I was able to recreate this by inserting 500 records and running the reproduction script through valgrind. After only 3 loops, this was the result:

==119407== 586,153,984 bytes in 559 blocks are possibly lost in loss record 308 of 309
==119407==    at 0x484386F: malloc (vg_replace_malloc.c:393)
==119407==    by 0x16820F66: fetch_and_store(StatementData*, bool, bool*) (odbc_connection.cpp:3616)
==119407==    by 0x168215ED: fetch_all_and_store(StatementData*, bool, bool*) (odbc_connection.cpp:3977)
==119407==    by 0x168274F3: QueryAsyncWorker::Execute() (odbc_connection.cpp:935)
==119407==    by 0x7DEBA14: worker (threadpool.c:122)
==119407==    by 0x7C6C14C: start_thread (pthread_create.c:442)
==119407==    by 0x7CECBB3: clone (clone.S:100)
==119407== 
==119407== 7,293,894,656 bytes in 6,956 blocks are definitely lost in loss record 309 of 309
==119407==    at 0x484386F: malloc (vg_replace_malloc.c:393)
==119407==    by 0x16820F66: fetch_and_store(StatementData*, bool, bool*) (odbc_connection.cpp:3616)
==119407==    by 0x168215ED: fetch_all_and_store(StatementData*, bool, bool*) (odbc_connection.cpp:3977)
==119407==    by 0x168274F3: QueryAsyncWorker::Execute() (odbc_connection.cpp:935)
==119407==    by 0x7DEBA14: worker (threadpool.c:122)
==119407==    by 0x7C6C14C: start_thread (pthread_create.c:442)
==119407==    by 0x7CECBB3: clone (clone.S:100)

The problem seems to be that when long data is used, the bind_type doesn't get copied from data->columns[column_index]->bind_type into the ColumnData object and is instead left at its default-constructed value (0). This means that when the destructor runs, the allocated data is not freed as it should be.
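To make the failure mode concrete, here is a minimal, self-contained C++ sketch of the pattern described above. ColumnData, its fields, and SQL_C_CHAR_LIKE here are simplified stand-ins chosen for illustration, not node-odbc's actual definitions; the point is only that a destructor which dispatches on bind_type frees nothing when bind_type is left at its default of 0.

#include <cstdlib>

// Hypothetical stand-in for the real ODBC C-type constant (e.g. SQL_C_CHAR).
constexpr short SQL_C_CHAR_LIKE = 1;

// Simplified stand-in for node-odbc's ColumnData; the real class differs.
struct ColumnData {
	short bind_type = 0;   // default-constructed value; never set on the buggy path
	char *data = nullptr;  // buffer allocated with malloc() while fetching

	~ColumnData() {
		// The buffer is only freed when bind_type identifies it as character
		// data. With bind_type still 0, free() is skipped and the buffer leaks.
		if (bind_type == SQL_C_CHAR_LIKE) {
			std::free(data);
		}
	}
};

int main() {
	{
		ColumnData leaky;
		leaky.data = static_cast<char *>(std::malloc(256));
		// bind_type stays 0: the destructor skips free() and 256 bytes leak,
		// once per TEXT column per fetched row in the scenario above.
	}
	{
		ColumnData fixed;
		fixed.bind_type = SQL_C_CHAR_LIKE; // analogous to copying
		                                   // data->columns[column_index]->bind_type
		fixed.data = static_cast<char *>(std::malloc(256));
		// The destructor now frees the buffer.
	}
	return 0;
}

Under this reading, the fix is to copy bind_type from the column descriptor at the point where the ColumnData is populated, which is consistent with the diagnosis above and with this issue being closed by #306.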

@Rohit-Parte

@kadler @markdirish can someone please quickly review and merge this PR? We have been facing the same problem for two months.
