
Research: demonstrate if parallel SQL queries are worthwhile #1727

Open · simonw opened this issue Apr 27, 2022 · 32 comments

Comments

simonw (Owner) commented Apr 27, 2022

I added parallel SQL query execution here:

My hunch is that this will take advantage of multiple cores, since Python's sqlite3 module releases the GIL once a query is passed to SQLite.

I'd really like to prove this is the case though. Just not sure how to do it!

Larger question: is this performance optimization actually improving performance at all? Under what circumstances is it worthwhile?
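
One way to get a quick signal outside of Datasette entirely: a standalone benchmark that runs the same query serially and then across a thread pool, one connection per thread. A minimal sketch, assuming a reasonably large database file and an expensive query (both are placeholders here):

import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor

DB_PATH = "github.db"  # placeholder: any reasonably large SQLite database
QUERY = "select count(*) from commits"  # placeholder: an expensive query

def run_query(_):
    # Each thread opens its own read-only connection
    conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
    try:
        return conn.execute(QUERY).fetchall()
    finally:
        conn.close()

start = time.perf_counter()
for i in range(4):
    run_query(i)
print("serial:  ", time.perf_counter() - start)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(run_query, range(4)))
print("parallel:", time.perf_counter() - start)

If the parallel run finishes in noticeably less wall-clock time than the serial run (and CPU usage climbs past a single core), the GIL really is being released for a meaningful share of the work.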

simonw changed the title from "Research: demonstrate if parallel SQL queries take advantage of multiple cores" to "Research: demonstrate if parallel SQL queries are worthwhile" on Apr 27, 2022

simonw (Owner) commented Apr 27, 2022

Wrote more about that here: https://simonwillison.net/2022/Apr/27/parallel-queries/

Compare https://latest-with-plugins.datasette.io/github/commits?_facet=repo&_facet=committer&_trace=1

[screenshot: trace with parallel execution]

With the same thing but with parallel execution disabled:

https://latest-with-plugins.datasette.io/github/commits?_facet=repo&_facet=committer&_trace=1&_noparallel=1

[screenshot: trace with parallel execution disabled]

Those total page load time numbers are very similar. Is this parallel optimization worthwhile?

Maybe it's only worth it on larger databases? Or maybe larger databases perform worse with this?

simonw (Owner) commented Apr 27, 2022

I just remembered the --setting num_sql_threads option... which defaults to 3!

datasette/datasette/app.py

Lines 109 to 113 in 942411e

Setting(
    "num_sql_threads",
    3,
    "Number of threads in the thread pool for executing SQLite queries",
),

That would explain why the first trace never seems to show more than three SQL queries executing at once.
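
For experiments, the default can be raised at startup - the thread count of 10 here is just an arbitrary value to test with:

datasette github.db --setting num_sql_threads 10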

simonw (Owner) commented Apr 27, 2022

One weird thing: I noticed that in the parallel trace above the SQL query bars are wider. Mousing over shows the duration in ms, and I got 13ms for this query:

select message as value, count(*) as n from (

But in the ?_noparallel=1 version that same query took 2.97ms.

Given those numbers though I would expect the overall page time to be MUCH worse for the parallel version - but the page load times are instead very close to each other, with parallel often winning.

This is super-weird.

simonw (Owner) commented Apr 27, 2022

Relevant: here's the code that sets up a Datasette SQLite connection:

def connect(self, write=False):
    if self.memory_name:
        uri = "file:{}?mode=memory&cache=shared".format(self.memory_name)
        conn = sqlite3.connect(
            uri,
            uri=True,
            check_same_thread=False,
        )
        if not write:
            conn.execute("PRAGMA query_only=1")
        return conn
    if self.is_memory:
        return sqlite3.connect(":memory:", uri=True)
    # mode=ro or immutable=1?
    if self.is_mutable:
        qs = "?mode=ro"
    else:
        qs = "?immutable=1"
    assert not (write and not self.is_mutable)
    if write:
        qs = ""
    return sqlite3.connect(
        f"file:{self.path}{qs}", uri=True, check_same_thread=False
    )

It's using check_same_thread=False - here's the Python docs on that:

By default, check_same_thread is True and only the creating thread may use the connection. If set False, the returned connection may be shared across multiple threads. When using multiple threads with the same connection writing operations should be serialized by the user to avoid data corruption.

This is why Datasette reserves a single connection for write queries and queues them up in memory, as described here.
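
The general shape of that pattern, as a rough sketch (illustrative names only, not Datasette's actual code):

import queue
import sqlite3
import threading

write_queue = queue.Queue()

def writer_thread(db_path):
    # The only connection that ever writes; all other connections are read-only
    conn = sqlite3.connect(db_path)
    while True:
        sql, params, done = write_queue.get()
        try:
            with conn:  # commits (or rolls back) each write
                conn.execute(sql, params)
        finally:
            done.set()
            write_queue.task_done()

threading.Thread(target=writer_thread, args=("data.db",), daemon=True).start()

# Any thread can submit a write; the writer applies them one at a time
done = threading.Event()
write_queue.put(("create table if not exists log (msg text)", [], done))
done.wait()

A real implementation also has to hand results and exceptions back to the caller, which this sketch skips.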

simonw (Owner) commented Apr 27, 2022

I think I need some much more in-depth tracing tricks for this.

https://www.maartenbreddels.com/perf/jupyter/python/tracing/gil/2021/01/14/Tracing-the-Python-GIL.html looks relevant - uses the perf tool on Linux.

simonw (Owner) commented Apr 27, 2022

Something worth digging into: are these parallel queries running against the same SQLite connection, or are they each running against a separate SQLite connection?

Just realized I know the answer: they're running against separate SQLite connections, because that's how the time limit mechanism works: it installs a progress handler for each connection which terminates the query after a set time.
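
Roughly, that time limit mechanism looks like this (a sketch of the idea rather than the exact Datasette code):

import time
from contextlib import contextmanager

@contextmanager
def sqlite_timelimit(conn, ms):
    deadline = time.perf_counter() + ms / 1000
    def handler():
        # Called every N SQLite virtual-machine instructions; returning a
        # truthy value aborts the current query with an "interrupted" error
        return time.perf_counter() > deadline
    conn.set_progress_handler(handler, 10000)
    try:
        yield
    finally:
        conn.set_progress_handler(None, 10000)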

This means that if SQLite benefits from multiple threads using the same connection (due to shared caches or similar) then Datasette will not be seeing those benefits.

It also means that if there's some mechanism within SQLite that penalizes you for having multiple parallel connections to a single file (just guessing here, maybe there's some kind of locking going on?) then Datasette will suffer those penalties.

I should try seeing what happens with WAL mode enabled.

glyph commented Apr 27, 2022

You don't want to re-use an SQLite connection from multiple threads anyway: https://www.sqlite.org/threadsafe.html

Multiple connections can operate on the file in parallel, but a single connection can't:

Multi-thread. In this mode, SQLite can be safely used by multiple threads provided that no single database connection is used simultaneously in two or more threads.

(emphasis mine)

glyph commented Apr 27, 2022

I should try seeing what happens with WAL mode enabled.

I've only skimmed above but it looks like you're doing mainly read-only queries? WAL mode is about better interactions between writers & readers, primarily.

simonw (Owner) commented Apr 27, 2022

Yeah all of this is pretty much assuming read-only connections. Datasette has a separate mechanism for ensuring that writes are executed one at a time against a dedicated connection from an in-memory queue:

simonw (Owner) commented Apr 27, 2022

WAL mode didn't seem to make a difference. I thought there was a chance it might help multiple read connections operate at the same time, but it looks like it really only matters when writes are going on.

simonw (Owner) commented Apr 27, 2022

This looks VERY relevant: SQLite Shared-Cache Mode:

SQLite includes a special "shared-cache" mode (disabled by default) intended for use in embedded servers. If shared-cache mode is enabled and a thread establishes multiple connections to the same database, the connections share a single data and schema cache. This can significantly reduce the quantity of memory and IO required by the system.

Enabled as part of the URI filename:

ATTACH 'file:aux.db?cache=shared' AS aux;

Turns out I'm already using this for in-memory databases that have .memory_name set, but not (yet) for regular file-backed databases:

def connect(self, write=False):
    if self.memory_name:
        uri = "file:{}?mode=memory&cache=shared".format(self.memory_name)
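
Presumably extending that to file-backed databases would mean adding cache=shared to the file URI - an untested sketch:

conn = sqlite3.connect(
    "file:github.db?mode=ro&cache=shared",
    uri=True,
    check_same_thread=False,
)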

simonw (Owner) commented Apr 27, 2022

Tried that and it didn't seem to make a difference either.

I really need a much deeper view of what's going on here.

simonw (Owner) commented Apr 27, 2022

Another avenue: https://twitter.com/weargoggles/status/1519426289920270337

SQLite has its own mutexes to provide thread safety, which as another poster noted are out of play in multi process setups. Perhaps downgrading from the “serializable” to “multi-threaded” safety would be okay for Datasette? https://sqlite.org/c3ref/c_config_covering_index_scan.html#sqliteconfigmultithread

Doesn't look like there's an obvious way to access that from Python via the sqlite3 module though.

simonw (Owner) commented Apr 27, 2022

Really wild idea: what if I created three copies of the SQLite database file - as three separate file names - and then balanced the parallel queries across all these? Any chance that could avoid any mysterious locking issues?

simonw (Owner) commented Apr 27, 2022

I wonder if it would be worth exploring multiprocessing here.

simonw (Owner) commented Apr 27, 2022

I should check my timing mechanism. Am I capturing the time taken just in SQLite, or does it include time spent in Python crossing between the async and threaded worlds and waiting for a thread pool worker to become available?

That could explain the longer query times.

simonw (Owner) commented Apr 28, 2022

Here's where read queries are instrumented:

with trace("sql", database=self.name, sql=sql.strip(), params=params):
    results = await self.execute_fn(sql_operation_in_thread)

So the instrumentation is actually capturing quite a bit of Python activity before it gets to SQLite:

async def execute_fn(self, fn):
    def in_thread():
        conn = getattr(connections, self.name, None)
        if not conn:
            conn = self.connect()
            self.ds._prepare_connection(conn, self.name)
            setattr(connections, self.name, conn)
        return fn(conn)

    return await asyncio.get_event_loop().run_in_executor(
        self.ds.executor, in_thread
    )

And then:

def sql_operation_in_thread(conn):
    time_limit_ms = self.ds.sql_time_limit_ms
    if custom_time_limit and custom_time_limit < time_limit_ms:
        time_limit_ms = custom_time_limit

    with sqlite_timelimit(conn, time_limit_ms):
        try:
            cursor = conn.cursor()
            cursor.execute(sql, params if params is not None else {})
            max_returned_rows = self.ds.max_returned_rows
            if max_returned_rows == page_size:
                max_returned_rows += 1
            if max_returned_rows and truncate:
                rows = cursor.fetchmany(max_returned_rows + 1)
                truncated = len(rows) > max_returned_rows
                rows = rows[:max_returned_rows]
            else:
                rows = cursor.fetchall()
                truncated = False
        except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:
            if e.args == ("interrupted",):
                raise QueryInterrupted(e, sql, params)
            if log_sql_errors:
                sys.stderr.write(
                    "ERROR: conn={}, sql = {}, params = {}: {}\n".format(
                        conn, repr(sql), params, e
                    )
                )
                sys.stderr.flush()
            raise

Ideally I'd like that trace() block to wrap just the cursor.execute() and cursor.fetchmany(...) or cursor.fetchall() calls.

simonw (Owner) commented Apr 28, 2022

Though it would be interesting to also have the trace reveal how much time is spent in the functions that wrap that core SQL - the stuff that is being measured at the moment.

I have a hunch that this could help solve the over-arching performance mystery.

simonw (Owner) commented Apr 28, 2022

Tried this but I'm getting back an empty JSON array of traces at the bottom of the page most of the time (intermittently it works correctly):

diff --git a/datasette/database.py b/datasette/database.py
index ba594a8..d7f9172 100644
--- a/datasette/database.py
+++ b/datasette/database.py
@@ -7,7 +7,7 @@ import sys
 import threading
 import uuid
 
-from .tracer import trace
+from .tracer import trace, trace_child_tasks
 from .utils import (
     detect_fts,
     detect_primary_keys,
@@ -207,30 +207,31 @@ class Database:
                 time_limit_ms = custom_time_limit
 
             with sqlite_timelimit(conn, time_limit_ms):
-                try:
-                    cursor = conn.cursor()
-                    cursor.execute(sql, params if params is not None else {})
-                    max_returned_rows = self.ds.max_returned_rows
-                    if max_returned_rows == page_size:
-                        max_returned_rows += 1
-                    if max_returned_rows and truncate:
-                        rows = cursor.fetchmany(max_returned_rows + 1)
-                        truncated = len(rows) > max_returned_rows
-                        rows = rows[:max_returned_rows]
-                    else:
-                        rows = cursor.fetchall()
-                        truncated = False
-                except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:
-                    if e.args == ("interrupted",):
-                        raise QueryInterrupted(e, sql, params)
-                    if log_sql_errors:
-                        sys.stderr.write(
-                            "ERROR: conn={}, sql = {}, params = {}: {}\n".format(
-                                conn, repr(sql), params, e
+                with trace("sql", database=self.name, sql=sql.strip(), params=params):
+                    try:
+                        cursor = conn.cursor()
+                        cursor.execute(sql, params if params is not None else {})
+                        max_returned_rows = self.ds.max_returned_rows
+                        if max_returned_rows == page_size:
+                            max_returned_rows += 1
+                        if max_returned_rows and truncate:
+                            rows = cursor.fetchmany(max_returned_rows + 1)
+                            truncated = len(rows) > max_returned_rows
+                            rows = rows[:max_returned_rows]
+                        else:
+                            rows = cursor.fetchall()
+                            truncated = False
+                    except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:
+                        if e.args == ("interrupted",):
+                            raise QueryInterrupted(e, sql, params)
+                        if log_sql_errors:
+                            sys.stderr.write(
+                                "ERROR: conn={}, sql = {}, params = {}: {}\n".format(
+                                    conn, repr(sql), params, e
+                                )
                             )
-                        )
-                        sys.stderr.flush()
-                    raise
+                            sys.stderr.flush()
+                        raise
 
             if truncate:
                 return Results(rows, truncated, cursor.description)
@@ -238,9 +239,8 @@ class Database:
             else:
                 return Results(rows, False, cursor.description)
 
-        with trace("sql", database=self.name, sql=sql.strip(), params=params):
-            results = await self.execute_fn(sql_operation_in_thread)
-        return results
+        with trace_child_tasks():
+            return await self.execute_fn(sql_operation_in_thread)
 
     @property
     def size(self):

simonw (Owner) commented Apr 28, 2022

Asked on the SQLite forum about this here: https://sqlite.org/forum/forumpost/ffbfa9f38e

simonw (Owner) commented Apr 28, 2022

I could try py-spy top, which I previously used here:
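
For reference, the invocations look something like this, where the PID placeholder is the running Datasette process:

py-spy top --pid $DATASETTE_PID
py-spy record -o profile.svg --pid $DATASETTE_PID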

simonw (Owner) commented Apr 28, 2022

Maybe this is the Python GIL after all?

I've been hoping that the GIL won't be an issue because the sqlite3 module releases the GIL for the duration of the execution of a SQL query - see https://github.com/python/cpython/blob/f348154c8f8a9c254503306c59d6779d4d09b3a9/Modules/_sqlite/cursor.c#L749-L759

So I've been hoping this means that SQLite code itself can run concurrently on multiple cores even when Python threads cannot.

But maybe I'm misunderstanding how that works?

simonw (Owner) commented Apr 28, 2022

I ran py-spy and then hammered refresh a bunch of times on the http://127.0.0.1:8856/github/commits?_facet=repo&_facet=committer&_trace=1&_noparallel= page - it generated this SVG profile for me.

The area on the right is the threads running the DB queries:

[screenshot: py-spy SVG profile]

Interactive version here: https://static.simonwillison.net/static/2022/datasette-parallel-profile.svg

simonw (Owner) commented Apr 28, 2022

Useful theory from Keith Medcalf https://sqlite.org/forum/forumpost/e363c69d3441172e

This is true, but the concurrency is limited to the execution which occurs with the GIL released (that is, in the native C sqlite3 library itself). Each row (for example) can be retrieved in parallel but "constructing the python return objects for each row" will be serialized (by the GIL).

That is to say that if you have two python threads each with their own connection, and each one is performing a select that returns 1,000,000 rows (let's say that is 25% of the candidates for each select) then the difference in execution time between executing two python threads in parallel vs a single serial thread will not be much different (if even detectable at all). In fact it is possible that the multiple-threaded version takes longer to run both queries to completion because of the increased contention over a shared resource (the GIL).

So maybe this is a GIL thing.

I should test with some expensive SQL queries (maybe big aggregations against large tables) and see if I can spot an improvement there.

simonw (Owner) commented Apr 28, 2022

I could experiment with the await loop.run_in_executor(processpool_executor, fn) mechanism described in https://stackoverflow.com/a/29147750

Code examples: https://cs.github.com/?scopeName=All+repos&scope=&q=run_in_executor+ProcessPoolExecutor
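
A rough sketch of what that might look like - each worker process opens its own connection (a connection must never be carried across a fork), and results come back to the parent as pickled tuples; the database path and queries are placeholders:

import asyncio
import sqlite3
from concurrent.futures import ProcessPoolExecutor

DB_PATH = "github.db"  # placeholder

def run_query(sql):
    # Opened inside the worker process - never inherit a connection from the parent
    conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        results = await asyncio.gather(
            loop.run_in_executor(pool, run_query, "select count(*) from commits"),
            loop.run_in_executor(pool, run_query, "select count(*) from issues"),
        )
    print([len(r) for r in results])

if __name__ == "__main__":
    asyncio.run(main())

The pickling overhead on every returned row could easily cost more than the GIL contention it avoids, so this would need benchmarking too.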

simonw (Owner) commented Apr 28, 2022

The two most promising theories at the moment, from here and Twitter and the SQLite forum, are:

  • SQLite is I/O bound - it generally only goes as fast as it can load data from disk. Multiple connections all competing for the same file on disk are going to end up blocked at the file system layer. But maybe this means in-memory databases will perform better?
  • It's the GIL. The sqlite3 C code may release the GIL, but the bits that do things like assembling Row objects to return still happen in Python, and that Python can only run on a single core.

A couple of ways to research the in-memory theory:

I need to do some more, better benchmarks using these different approaches.

https://twitter.com/laurencerowe/status/1519780174560169987 also suggests:

Maybe try:

  1. Copy the sqlite file to /dev/shm and rerun (all in ram.)
  2. Create a CTE which calculates Fibonacci or similar so you can test something completely cpu bound (only return max value or something to avoid crossing between sqlite/Python.)

I like that second idea a lot - I could use the mandelbrot example from https://www.sqlite.org/lang_with.html#outlandish_recursive_query_examples
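
For the second idea, a Fibonacci-style recursive CTE that returns a single value keeps nearly all of the work inside SQLite's C code - something like this (the modulo just stops the integers overflowing) could be dropped into the standalone benchmark sketch earlier in this thread:

CPU_BOUND_QUERY = """
with recursive fib(n, a, b) as (
  select 1, 0, 1
  union all
  select n + 1, b, (a + b) % 1000000007 from fib where n < 1000000
)
select max(b) from fib
"""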

simonw (Owner) commented Apr 28, 2022

Here's a very useful (recent) article about how the GIL works and how to think about it: https://pythonspeed.com/articles/python-gil/ - via https://lobste.rs/s/9hj80j/when_python_can_t_thread_deep_dive_into_gil

From that article:

For example, let's consider an extension module written in C or Rust that lets you talk to a PostgreSQL database server.

Conceptually, handling a SQL query with this library will go through three steps:

  1. Deserialize from Python to the internal library representation. Since this will be reading Python objects, it needs to hold the GIL.
  2. Send the query to the database server, and wait for a response. This doesn't need the GIL.
  3. Convert the response into Python objects. This needs the GIL again.

As you can see, how much parallelism you can get depends on how much time is spent in each step. If the bulk of time is spent in step 2, you'll get parallelism there. But if, for example, you run a SELECT and get a large number of rows back, the library will need to create many Python objects, and step 3 will have to hold GIL for a while.

That explains what I'm seeing here. I'm pretty convinced now that the reason I'm not getting a performance boost from parallel queries is that there's more time spent in Python code assembling the results than in SQLite C code executing the query.

simonw (Owner) commented Apr 29, 2022

It would be really fun to try running this with the in-development nogil Python from https://github.com/colesbury/nogil

There's a Docker container for it: https://hub.docker.com/r/nogil/python

It suggests you can run something like this:

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp \
  -w /usr/src/myapp nogil/python python your-daemon-or-script.py

simonw (Owner) commented Apr 29, 2022

OK, I just got the most incredible result with that!

I started up a container running bash like this, from my datasette checkout. I'm mapping port 8005 on my laptop to port 8001 inside the container because laptop port 8001 was already doing something else:

docker run -it --rm --name my-running-script -p 8005:8001 -v "$PWD":/usr/src/myapp \
  -w /usr/src/myapp nogil/python bash

Then in bash I ran the following commands to install Datasette and its dependencies:

pip install -e '.[test]'
pip install datasette-pretty-traces # For debug tracing

Then I started Datasette against my github.db database (from github-to-sqlite.dogsheep.net/github.db) like this:

datasette github.db -h 0.0.0.0 --setting trace_debug 1

I hit the following two URLs to compare the parallel vs. non-parallel implementations:

  • http://127.0.0.1:8005/github/issues?_facet=milestone&_facet=repo&_trace=1&_size=10
  • http://127.0.0.1:8005/github/issues?_facet=milestone&_facet=repo&_trace=1&_size=10&_noparallel=1

And... the parallel one beat the non-parallel one decisively, on multiple page refreshes!

Not parallel: 77ms

Parallel: 47ms

[screenshots: traces for the non-parallel and parallel runs]

So yeah, I'm very confident this is a problem with the GIL. And I am absolutely stunned that @colesbury's fork ran Datasette (which has some reasonably tricky threading and async stuff going on) out of the box!

simonw (Owner) commented Apr 30, 2022

Something to consider if I look into subprocesses for parallel query execution:

https://sqlite.org/howtocorrupt.html#_carrying_an_open_database_connection_across_a_fork_

Do not open an SQLite database connection, then fork(), then try to use that database connection in the child process. All kinds of locking problems will result and you can easily end up with a corrupt database. SQLite is not designed to support that kind of behavior. Any database connection that is used in a child process must be opened in the child process, not inherited from the parent.

fgregg (Contributor) commented Sep 26, 2022

from your analysis, it seems like the GIL is blocking on loading of the data from sqlite to python (particularly in the fetchmany call)

this is probably a simplistic idea, but what if you had the python code in the execute method iterate over the cursor and yield out rows or small chunks of rows.

something like:

            with sqlite_timelimit(conn, time_limit_ms):
                try:
                    cursor = conn.cursor()
                    cursor.execute(sql, params if params is not None else {})
                except:
                    ...
            max_returned_rows = self.ds.max_returned_rows
            if max_returned_rows == page_size:
                max_returned_rows += 1
            if max_returned_rows and truncate:
                for i, row in enumerate(cursor):
                    yield row
                    if i == max_returned_rows - 1:
                        break
            else:
                for row in cursor:
                    yield row
                truncated = False

this kind of thing works well with a postgres server side cursor, but i'm not sure if it will hold for sqlite.

you would still spend about the same amount of time in python and would be contending for the gil, but it could be non-blocking.

depending on the data flow, this could also have some benefit for memory (data stays in more compact sqlite-land until you need it).
