I need to work with several large database tables in Julia. Executing the query currently takes a long time due to all of the data being transferred (using ODBC.query). Does ODBC.jl have a way to loop over individual rows in a sequential manner, similar to readline() when working with text files, or Pyodbc's fetchone?
I think there might be some functionality available that can be used here, but I can't fully parse the documentation.
Thanks for making this package.
source = ODBC.Source(dsn, query) # executes the query against the DB and returns initial result chunk
types = Data.types(source)
row = col = 1
while !Data.isdone(source, row, col)
    for col = 1:size(source, 2)
        val = Data.streamfrom(source, Data.Field, types[col], row, col)
    end
    row += 1  # advance to the next row
end
Obviously this isn't super pretty, but it's indeed possible. Suggestions for a better API? I mean, we could define the iteration protocol on an ODBC.Source so that each iteration returns an entire row as a tuple; maybe that would be best.
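Just to sketch what that suggestion could look like (this is not the current ODBC.jl API — `MockSource` and its row storage are made up here to stand in for a real database source), implementing Julia's iteration protocol would let callers write a plain `for row in source` loop and receive each row as a tuple:

```julia
# Hypothetical sketch: a row-iterable source. An in-memory vector of tuples
# stands in for result rows streamed from the database.
struct MockSource
    rows::Vector{Tuple{Int,String}}  # stand-in for DB result rows
end

# Iteration protocol: each step yields one row as a tuple, with the row
# index as the iteration state.
Base.iterate(s::MockSource, i::Int = 1) =
    i > length(s.rows) ? nothing : (s.rows[i], i + 1)
Base.length(s::MockSource) = length(s.rows)

source = MockSource([(1, "a"), (2, "b")])
for row in source
    println(row)  # each `row` is a tuple, e.g. (1, "a")
end
```

A real implementation would keep the `Data.isdone`/`Data.streamfrom` calls from the snippet above inside `iterate`, so rows are fetched lazily instead of materializing the whole result set, which is exactly the `fetchone`-style behavior asked for.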