Do multiple stores on the same node work correctly? #3531
Hi @tildeleb, you're running into #2067. The tl;dr is that you can't have two copies of the same replica on a single node, which blocks the rebalancing you'd expect in your case: the whole keyspace starts out on your first store, and part of it can only reach the second store by moving there. But the strategy for moving a replica is basically copy-then-delete, which for a short period of time means two copies of the same replica on the same node.
So, as of today, you'll need to run more nodes than the replication factor. This is a bit of a bummer, but stay tuned: you'll be able to do this soon. I'm glad you're taking the time to test us; let us know if you run into anything else that doesn't work as expected.
Thanks. I did a search before I wrote this issue using keywords like "store" and "EOF" but didn't see #2067. I should have searched on "replica". Also to be considered with this bug: what happens when storage runs out? A panic is probably not what customers want. I would suggest warnings well ahead of running out of storage, for example at 70%, 80%, 90%, 95%, 96%, and so on. A simple mechanism should suffice, like a URL that gets hit when a threshold is crossed. I guess that raises the question of being able to add stores/nodes on the fly. If this is a dup of #2067, please feel free to close it out.
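To make the suggestion above concrete, here is a minimal sketch of the kind of threshold-plus-URL mechanism being proposed. Everything in it is hypothetical: the thresholds, the notify URL, and the script itself are illustrations of an external watchdog, not anything CockroachDB provides.

```shell
#!/bin/sh
# Hypothetical external watchdog: warn at increasing disk-usage
# thresholds and hit a caller-supplied URL when one is crossed.
# Nothing here is a CockroachDB feature; it is an illustration only.

# Print every threshold that the given usage percentage has reached.
thresholds_crossed() {
  usage=$1
  for t in 70 80 90 95 96; do
    if [ "$usage" -ge "$t" ]; then
      echo "$t"
    fi
  done
}

# Example wiring (commented out): check a store's volume and notify.
# The URL below is a placeholder, not a real endpoint.
# usage=$(df --output=pcent /ssd | tail -1 | tr -dc '0-9')
# for t in $(thresholds_crossed "$usage"); do
#   curl -fsS "https://example.invalid/notify?disk=$t" || true
# done
```

In practice you would run something like this from cron against each store's volume; the point is just that a dumb percentage check plus an HTTP hook is enough to avoid being surprised by "No space left on device".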
Yes, there will be prominent warnings in the UI, emanating from the cluster, when a node runs low on storage. You should already be able to add nodes on the fly (but you need more nodes than the replication factor to avoid the bug above).
Note that as long as those nodes are all on the same disk, you'll also need
I am trying to get a config with two stores on a single node working. The volumes have about 38 GB of storage available each. I issued the following commands to set up a two-store node using the volumes /ssd and /mnt. After that I ran a program to load 70 GB of random keys and values that I generated. All the ranges end up on /ssd/cdb, and when that volume runs out of space cockroach panics with "No space left on device".
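For context, a two-store single-node invocation looks roughly like this on recent CockroachDB releases. The exact flag syntax is an assumption on my part (older builds used different store flags), so verify it against `cockroach start --help` for the version you are running:

```shell
# Start one node with two stores, one per volume.
# Flag syntax assumed from recent CockroachDB releases; verify
# against `cockroach start --help` on your version.
cockroach start \
  --insecure \
  --store=path=/ssd/cdb \
  --store=path=/mnt/cdb \
  --listen-addr=localhost:26257
```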
Questions:
Would --balance-mode: "range count" be a workaround?
The following log entries would seem to be relevant. This is a partial log from the beginning and the end of the log file.