Research: demonstrate if parallel SQL queries are worthwhile #1727
Wrote more about that here: https://simonwillison.net/2022/Apr/27/parallel-queries/

Compare https://latest-with-plugins.datasette.io/github/commits?_facet=repo&_facet=committer&_trace=1 with the same page with parallel execution disabled: the total page load time numbers are very similar.

Is this parallel optimization worthwhile? Maybe it's only worth it on larger databases? Or maybe larger databases perform worse with this?
I just remembered this (lines 109 to 113 in 942411e) - it would explain why the first trace never seems to show more than three SQL queries executing at once.
One weird thing: I noticed that in the parallel trace above the SQL query bars are wider. Mouseover shows duration in ms, and I got 13ms for one query in the parallel version, but a much shorter duration for the same query in the non-parallel trace.

Given those numbers I would expect the overall page time to be MUCH worse for the parallel version - but the page load times are instead very close to each other, with parallel often winning. This is super-weird.
Relevant: here's the code that sets up a Datasette SQLite connection (datasette/datasette/database.py, lines 73 to 96 in 7a6654a).
It's using ...
This is why Datasette reserves a single connection for write queries and queues them up in memory, as described here.
I think I need some much more in-depth tracing tricks for this. https://www.maartenbreddels.com/perf/jupyter/python/tracing/gil/2021/01/14/Tracing-the-Python-GIL.html looks relevant - uses the ...
Also useful: https://avi.im/blag/2021/fast-sqlite-inserts/ - from a tip on Twitter: https://twitter.com/ricardoanderegg/status/1519402047556235264
Something worth digging into: are these parallel queries running against the same SQLite connection, or is each one running against a separate SQLite connection?

Just realized I know the answer: they're running against separate SQLite connections, because that's how the time limit mechanism works - it installs a progress handler for each connection which terminates it after a set time.

This means that if SQLite benefits from multiple threads using the same connection (due to shared caches or similar) then Datasette will not be seeing those benefits. It also means that if there's some mechanism within SQLite that penalizes you for having multiple parallel connections to a single file (just guessing here, maybe there's some kind of locking going on?) then Datasette will suffer those penalties.

I should try seeing what happens with WAL mode enabled.
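For context, that time limit mechanism works by installing a progress handler on each connection. Here's a minimal sketch of the idea behind Datasette's `sqlite_timelimit()` helper - not the actual implementation:

```python
import sqlite3
import time
from contextlib import contextmanager


@contextmanager
def time_limit(conn, ms):
    # SQLite calls the handler every N virtual-machine instructions;
    # returning a truthy value aborts the currently running query.
    deadline = time.monotonic() + ms / 1000
    conn.set_progress_handler(lambda: time.monotonic() > deadline, 10_000)
    try:
        yield
    finally:
        conn.set_progress_handler(None, 10_000)


conn = sqlite3.connect(":memory:")
try:
    with time_limit(conn, ms=20):
        # A deliberately never-ending query: aggregate over an infinite recursive CTE
        conn.execute(
            "with recursive c(x) as (values(1) union all select x + 1 from c) "
            "select count(*) from c"
        ).fetchall()
except sqlite3.OperationalError as e:
    print("Query aborted:", e)  # raised as "interrupted" when the handler fires
```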
You don't want to re-use an SQLite connection from multiple threads anyway: https://www.sqlite.org/threadsafe.html

Multiple connections can operate on the file in parallel, but a single connection can't:
(emphasis mine)
I've only skimmed above but it looks like you're doing mainly read-only queries? WAL mode is about better interactions between writers & readers, primarily.
Yeah, all of this is pretty much assuming read-only connections. Datasette has a separate mechanism for ensuring that writes are executed one at a time against a dedicated connection, from an in-memory queue.
WAL mode didn't seem to make a difference. I thought there was a chance it might help multiple read connections operate at the same time, but it looks like it really only matters when writes are going on.
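For reference, WAL is a one-off, persistent switch stored in the database file itself; outside Datasette it can be enabled like this (the file name here is just an example):

```python
import sqlite3

conn = sqlite3.connect("fixtures.db")  # example file name
# journal_mode is persisted in the database file, so this only needs to run once
print(conn.execute("PRAGMA journal_mode=wal").fetchone())  # -> ('wal',)
conn.close()
```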
This looks VERY relevant: SQLite Shared-Cache Mode:
Enabled as part of the URI filename:
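A sketch of what that looks like from Python's `sqlite3` module (file and database names here are examples; `uri=True` is required for `file:` URIs):

```python
import sqlite3

# Shared-cache mode is requested per-connection via the URI filename
conn = sqlite3.connect("file:fixtures.db?cache=shared", uri=True)

# The same mechanism lets multiple connections share one named in-memory database
mem = sqlite3.connect("file:memdb1?mode=memory&cache=shared", uri=True)
```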
Turns out I'm already using this for named in-memory databases (datasette/datasette/database.py, lines 73 to 75 in 7a6654a).
Tried that and it didn't seem to make a difference either. I really need a much deeper view of what's going on here.
Another avenue: https://twitter.com/weargoggles/status/1519426289920270337
Doesn't look like there's an obvious way to access that from Python via the `sqlite3` module, though.
Really wild idea: what if I created three copies of the SQLite database file - as three separate file names - and then balanced the parallel queries across all of these? Any chance that could avoid any mysterious locking issues?
I wonder if it would be worth exploring multiprocessing here.
I should check my timing mechanism. Am I capturing the time taken just in SQLite, or does it include time spent in Python crossing between the async and threaded worlds and waiting for a thread pool worker to become available? That could explain the longer query times.
Here's where read queries are instrumented (datasette/datasette/database.py, lines 241 to 242 in 7a6654a).
So the instrumentation is actually capturing quite a bit of Python activity before it gets to SQLite (datasette/datasette/database.py, lines 179 to 190 in 7a6654a).
And then (datasette/datasette/database.py, lines 204 to 233 in 7a6654a).
Ideally I'd like that trace to cover just the core SQL execution itself.
Though it would be interesting to also have the trace reveal how much time is spent in the functions that wrap that core SQL - the stuff that is being measured at the moment. I have a hunch that this could help solve the over-arching performance mystery.
Tried this but I'm getting back an empty JSON array of traces at the bottom of the page most of the time (intermittently it works correctly):

```diff
diff --git a/datasette/database.py b/datasette/database.py
index ba594a8..d7f9172 100644
--- a/datasette/database.py
+++ b/datasette/database.py
@@ -7,7 +7,7 @@ import sys
import threading
import uuid
-from .tracer import trace
+from .tracer import trace, trace_child_tasks
from .utils import (
detect_fts,
detect_primary_keys,
@@ -207,30 +207,31 @@ class Database:
time_limit_ms = custom_time_limit
with sqlite_timelimit(conn, time_limit_ms):
- try:
- cursor = conn.cursor()
- cursor.execute(sql, params if params is not None else {})
- max_returned_rows = self.ds.max_returned_rows
- if max_returned_rows == page_size:
- max_returned_rows += 1
- if max_returned_rows and truncate:
- rows = cursor.fetchmany(max_returned_rows + 1)
- truncated = len(rows) > max_returned_rows
- rows = rows[:max_returned_rows]
- else:
- rows = cursor.fetchall()
- truncated = False
- except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:
- if e.args == ("interrupted",):
- raise QueryInterrupted(e, sql, params)
- if log_sql_errors:
- sys.stderr.write(
- "ERROR: conn={}, sql = {}, params = {}: {}\n".format(
- conn, repr(sql), params, e
+ with trace("sql", database=self.name, sql=sql.strip(), params=params):
+ try:
+ cursor = conn.cursor()
+ cursor.execute(sql, params if params is not None else {})
+ max_returned_rows = self.ds.max_returned_rows
+ if max_returned_rows == page_size:
+ max_returned_rows += 1
+ if max_returned_rows and truncate:
+ rows = cursor.fetchmany(max_returned_rows + 1)
+ truncated = len(rows) > max_returned_rows
+ rows = rows[:max_returned_rows]
+ else:
+ rows = cursor.fetchall()
+ truncated = False
+ except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:
+ if e.args == ("interrupted",):
+ raise QueryInterrupted(e, sql, params)
+ if log_sql_errors:
+ sys.stderr.write(
+ "ERROR: conn={}, sql = {}, params = {}: {}\n".format(
+ conn, repr(sql), params, e
+ )
)
- )
- sys.stderr.flush()
- raise
+ sys.stderr.flush()
+ raise
if truncate:
return Results(rows, truncated, cursor.description)
@@ -238,9 +239,8 @@ class Database:
else:
return Results(rows, False, cursor.description)
- with trace("sql", database=self.name, sql=sql.strip(), params=params):
- results = await self.execute_fn(sql_operation_in_thread)
- return results
+ with trace_child_tasks():
+ return await self.execute_fn(sql_operation_in_thread)
@property
     def size(self):
```
Asked on the SQLite forum about this here: https://sqlite.org/forum/forumpost/ffbfa9f38e
I could try ...
Maybe this is the Python GIL after all?

I've been hoping that the GIL won't be an issue because the `sqlite3` module releases the GIL once a query is passed to SQLite - so I've been hoping this means that SQLite code itself can run concurrently on multiple cores even when Python threads cannot. But maybe I'm misunderstanding how that works?
I ran a profiler against it. The area on the right is the threads running the DB queries. Interactive version here: https://static.simonwillison.net/static/2022/datasette-parallel-profile.svg
Useful theory from Keith Medcalf https://sqlite.org/forum/forumpost/e363c69d3441172e
So maybe this is a GIL thing. I should test with some expensive SQL queries (maybe big aggregations against large tables) and see if I can spot an improvement there.
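A rough way to test that, outside Datasette (the database file, table and column names below are assumptions based on the demo database, not from the issue): run the same expensive aggregate serially and then from several threads, each with its own connection, and compare wall-clock times.

```python
import sqlite3
import threading
import time

DB = "github.db"  # hypothetical database file
# Hypothetical expensive aggregation against a large table
SQL = "select committer, count(*), sum(length(message)) from commits group by committer"


def run_query():
    conn = sqlite3.connect(DB)  # separate connection per thread, as Datasette does
    try:
        conn.execute(SQL).fetchall()
    finally:
        conn.close()


def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")


def parallel():
    threads = [threading.Thread(target=run_query) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


timed("serial x3", lambda: [run_query() for _ in range(3)])
timed("parallel x3", parallel)
```

If the GIL is the bottleneck, the parallel run should finish in barely less time than the serial one despite the queries spending most of their time inside SQLite.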
I could experiment with running queries in a `ProcessPoolExecutor` via `loop.run_in_executor()`. Code examples: https://cs.github.com/?scopeName=All+repos&scope=&q=run_in_executor+ProcessPoolExecutor
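A sketch of the shape that could take (assumptions: each query is shipped to a worker process as plain SQL text, the worker opens its own connection, and "fixtures.db" is a stand-in file name - this is not how Datasette currently works):

```python
import asyncio
import sqlite3
from concurrent.futures import ProcessPoolExecutor


def run_query(db_path, sql):
    # Runs in a worker process: open a fresh connection there, so nothing
    # SQLite-related crosses the process boundary except SQL text and rows.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()


async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=3) as pool:
        counts, names = await asyncio.gather(
            loop.run_in_executor(pool, run_query, "fixtures.db", "select count(*) from sqlite_master"),
            loop.run_in_executor(pool, run_query, "fixtures.db", "select name from sqlite_master"),
        )
    print(counts, names)


if __name__ == "__main__":
    asyncio.run(main())
```

This sidesteps the GIL entirely, at the cost of pickling every result set between processes.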
The two most promising theories at the moment, from here and Twitter and the SQLite forum, are:
A couple of ways to research the in-memory theory:
I need to do some more, better benchmarks using these different approaches. https://twitter.com/laurencerowe/status/1519780174560169987 also suggests:
I like that second idea a lot - I could use the mandelbrot example from https://www.sqlite.org/lang_with.html#outlandish_recursive_query_examples
Here's a very useful (recent) article about how the GIL works and how to think about it: https://pythonspeed.com/articles/python-gil/ - via https://lobste.rs/s/9hj80j/when_python_can_t_thread_deep_dive_into_gil From that article:
That explains what I'm seeing here. I'm pretty convinced now that the reason I'm not getting a performance boost from parallel queries is that there's more time spent in Python code assembling the results than in SQLite C code executing the query.
It would be really fun to try running this with the in-development nogil fork of Python. There's a Docker container for it: https://hub.docker.com/r/nogil/python - it suggests you can run something like this: ...
OK, I just got the most incredible result with that!

I started up a container running the nogil/python image, installed Datasette inside it, and then started Datasette against my database.

I hit two URLs to compare the parallel vs. not parallel implementations - and the parallel one beat the non-parallel one decisively, on multiple page refreshes:

- Not parallel: 77ms
- Parallel: 47ms

So yeah, I'm very confident this is a problem with the GIL. And I am absolutely stunned that @colesbury's fork ran Datasette (which has some reasonably tricky threading and async stuff going on) out of the box!
Something to consider if I look into subprocesses for parallel query execution: https://sqlite.org/howtocorrupt.html#_carrying_an_open_database_connection_across_a_fork_
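That rules out opening the connection in the parent process and then forking workers; the safe pattern is to open each connection inside the worker after it starts, for example via an initializer (a sketch, not Datasette code; "fixtures.db" is a stand-in file name):

```python
import sqlite3
from concurrent.futures import ProcessPoolExecutor

_conn = None  # one connection per worker process


def init_worker(db_path):
    # Runs inside the child process after it is created, so the parent's
    # connection (if any) is never carried across a fork().
    global _conn
    _conn = sqlite3.connect(db_path)


def run_sql(sql):
    return _conn.execute(sql).fetchall()


with ProcessPoolExecutor(max_workers=3, initializer=init_worker, initargs=("fixtures.db",)) as pool:
    print(pool.submit(run_sql, "select count(*) from sqlite_master").result())
```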
From your analysis, it seems like the GIL is blocking on loading of the data from SQLite into Python (particularly in the fetch calls). This is probably a simplistic idea, but what if the Python code iterated over the cursor and yielded rows out as it goes? Something like:

```python
with sqlite_timelimit(conn, time_limit_ms):
    try:
        cursor = conn.cursor()
        cursor.execute(sql, params if params is not None else {})
    except:
        ...  # error handling elided in this sketch
    max_returned_rows = self.ds.max_returned_rows
    if max_returned_rows == page_size:
        max_returned_rows += 1
    if max_returned_rows and truncate:
        # Stream rows out one at a time instead of fetchmany()-ing them all
        for i, row in enumerate(cursor):
            yield row
            if i == max_returned_rows - 1:
                break
    else:
        for row in cursor:
            yield row
        truncated = False
```

This kind of thing works well with a Postgres server-side cursor, but I'm not sure if it will hold for SQLite. You would still spend about the same amount of time in Python and would be contending for the GIL, but it could be non-blocking. Depending on the data flow, this could also have some benefit for memory (the data stays in more compact SQLite-land until you need it).
I added parallel SQL query execution, using `asyncio.gather()`, here: #1723

My hunch is that this will take advantage of multiple cores, since Python's `sqlite3` module releases the GIL once a query is passed to SQLite. I'd really like to prove this is the case though. Just not sure how to do it!
Larger question: is this performance optimization actually improving performance at all? Under what circumstances is it worthwhile?
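As a rough, self-contained illustration of the pattern (a stand-in using the `sqlite3` module and a thread pool directly, rather than Datasette's own `db.execute()`; "github.db" and the `commits` table are assumptions):

```python
import asyncio
import sqlite3


async def fetch(loop, db_path, sql):
    # Datasette's db.execute() similarly hands the query to a thread pool;
    # sqlite3 releasing the GIL inside SQLite is what should allow parallelism.
    def run():
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(sql).fetchall()
        finally:
            conn.close()
    return await loop.run_in_executor(None, run)


async def main():
    loop = asyncio.get_running_loop()
    db = "github.db"  # hypothetical file
    rows, count, facet = await asyncio.gather(
        fetch(loop, db, "select * from commits limit 100"),
        fetch(loop, db, "select count(*) from commits"),
        fetch(loop, db, "select committer, count(*) from commits group by committer"),
    )
    print(len(rows), count[0][0], len(facet))


asyncio.run(main())
```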