
zvol_threads default value leads to low synchronous write performance #392

Closed
dechamps opened this issue Sep 7, 2011 · 4 comments
Labels
Type: Feature Feature request or new feature
Milestone

Comments

@dechamps
Contributor

dechamps commented Sep 7, 2011

Currently, the zvol_threads variable, which controls the number of worker threads which process items from the ZVOL queues, is set to the number of available CPUs.

This choice seems to be based on the assumption that ZVOL threads are CPU-bound. This is not necessarily true, especially for synchronous writes. Consider the situation described in the comments for zil_commit(), which is called inside zvol_write() for synchronous writes:

itxs are committed in batches. In a heavily stressed zil there will be a commit writer thread who is writing out a bunch of itxs to the log for a set of committing threads (cthreads) in the same batch as the writer. Those cthreads are all waiting on the same cv for that batch.

There will also be a different and growing batch of threads that are waiting to commit (qthreads). When the committing batch completes, a transition occurs such that the cthreads exit and the qthreads become cthreads. One of the new cthreads becomes the writer thread for the batch. Any new threads arriving become new qthreads.

We can easily deduce that, in the case of ZVOLs, there can be a maximum of zvol_threads cthreads and qthreads. The default value for zvol_threads is typically between 1 and 8, which is way too low in this case. This means there will be a lot of small commits to the ZIL, which is very inefficient compared to a few big commits, especially since we have to wait for the data to be on stable storage. Increasing the number of threads will increase the amount of data waiting to be committed, and thus the size of the individual commits.
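The batching argument above can be illustrated with a toy model (a deliberate simplification for illustration, not the real zil_commit() logic): if each writer blocked in zil_commit() contributes one itx to the next batch, and each batch is flushed to stable storage with a single commit, then the number of commits needed for a fixed workload shrinks in proportion to the thread count.

```python
# Toy model of ZIL batch commits (hypothetical simplification): a batch can
# carry at most `threads` itxs, because at most that many writers can be
# blocked in zil_commit() at once. Each batch costs one flush to stable
# storage, so fewer, larger batches mean fewer expensive flushes.
import math

def zil_commits(total_itxs: int, threads: int) -> int:
    """Number of ZIL commits needed if each batch holds up to `threads` itxs."""
    return math.ceil(total_itxs / threads)

if __name__ == "__main__":
    for t in (1, 8, 32):
        print(f"{t:2d} threads -> {zil_commits(1024, t):4d} commits for 1024 itxs")
```

With 1024 synchronous writes, 8 threads cost 128 flushes while 32 threads cost only 32, which is consistent with the intuition that raising zvol_threads trades many small commits for a few large ones.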

On my system, in the context of VM disk image storage (lots of small synchronous writes), increasing zvol_threads from 8 to 32 results in a 50% increase in sequential synchronous write performance.

We should choose a more sensible default for zvol_threads. Unfortunately the optimal value is difficult to determine automatically, since it depends on the synchronous write latency of the underlying storage devices. In any case, a hardcoded value of 32 would probably be better than the current situation. Having a lot of ZVOL threads doesn't seem to have any real downside anyway.
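In the meantime, the default can be overridden at module load time. A sketch of the usual modprobe configuration (the file path is distribution-dependent, and 32 is simply the value suggested above, not a tuned recommendation):

```
# /etc/modprobe.d/zfs.conf (path may vary by distribution)
options zfs zvol_threads=32
```

The setting takes effect the next time the zfs module is loaded, since the option is only read during module initialization.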

@behlendorf
Contributor

Interesting results. So the initial value for zvol_threads was picked on the assumption that the threads would be CPU-bound. I figured that if this wasn't the case in practice, we could revisit the default. Have you seen any downsides so far to increasing the number of threads for async requests?

@dechamps
Contributor Author

None, but I must admit I didn't really test against async requests. In theory, async performance could be very slightly lower due to the additional context switches, but that should be it.

@stephane-chazelas

BTW, why doesn't zvol_threads (and zvol_major) show up in /sys/module/zfs/parameters?

# modinfo zfs | sed -n 's/^parm: *\([^:]*\).*/\1/p' | sort | diff - <(ls /sys/module/zfs/parameters)
64,65d63
< zvol_major
< zvol_threads

@behlendorf behlendorf reopened this Dec 7, 2011
@behlendorf
Contributor

I hadn't noticed they were missing from /sys/module/zfs/parameters; it looks like this is because their permission bits were set to 0. I'll push a patch shortly to update them to 0444. In the current implementation they can't be made writable safely, since they are used during module initialization.
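For context, a Linux module parameter's sysfs visibility is controlled by the permission argument of module_param(). A minimal sketch of the kind of change described here (the parameter names come from this thread; the description strings are illustrative, not quoted from the source):

```
/* With permission bits 0 the parameter can still be set via modprobe
 * options and shows up in `modinfo`, but it gets no entry under
 * /sys/module/zfs/parameters/. 0444 makes it world-readable there
 * while keeping it read-only at runtime. */
module_param(zvol_major, uint, 0444);   /* was: 0 */
MODULE_PARM_DESC(zvol_major, "Major number for zvol device");

module_param(zvol_threads, uint, 0444); /* was: 0 */
MODULE_PARM_DESC(zvol_threads, "Number of threads for zvol request processing");
```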

behlendorf added a commit to behlendorf/zfs that referenced this issue Dec 7, 2011
The zvol_major and zvol_threads module options were being created
with 0 permission bits.  This prevented them from being listed in
the /sys/module/zfs/parameters/ directory, although they were
visible in `modinfo zfs`.  This patch fixes the issue by updating
the permission bits to 0444.  For the moment these options must
be read-only because they are used during module initialization.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#392
Rudd-O pushed a commit to Rudd-O/zfs that referenced this issue Feb 1, 2012
dechamps added a commit to dechamps/zfs that referenced this issue Feb 8, 2012
Fixes openzfs#392.
pcd1193182 added a commit to pcd1193182/zfs that referenced this issue Aug 12, 2021
sdimitro pushed a commit to sdimitro/zfs that referenced this issue May 23, 2022
Currently, disks are added to the zettacache implicitly, by specifying
new devices on the command line with the `-c PATH` argument.

This commit adds a way to add disks to the zettacache without restarting
the agent, by running `zcache add PATH`.

Additionally, a `zcache sync [--merge]` subcommand is added, to sync a
checkpoint (and optionally request an immediate merge of the index).