This repository was archived by the owner on Nov 6, 2020. It is now read-only.

Effect of --pruning-history flag on disk usage not as expected #6584

Closed
julian1 opened this issue Sep 24, 2017 · 4 comments
Labels
F2-bug 🐞 The client fails to follow expected behavior. M4-core ⛓ Core client code / Rust. Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known.

Comments

@julian1

julian1 commented Sep 24, 2017

Parity version:
$ parity --version | grep version
version Parity/v1.6.10-unstable-1a5b17626-20170721/x86_64-linux-gnu/rustc1.18.0

Operating system:
$ uname -a
Linux p-test 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64 GNU/Linux

And installed:
compiled from source

I am trying to run a parity instance on a DigitalOcean node with a main partition of 30GB for development purposes. I want to keep some state trie history, rather than run a completely light node.

Running parity with the following flags,

parity -d data --min-peers 8 --max-peers 12 --no-color  --pruning fast

It syncs quickly, with an initial disk usage of about 10GB. According to https://ethereum.stackexchange.com/questions/143/what-are-the-ethereum-disk-space-needs (last updated June), this is about right, and subsequent growth should be on the order of 1GB / month.

However, over the course of a few hours, the overlayrecent directory consumes the rest of the drive, and I get a characteristic "No space left on device" error.

$ du -hs data/chains/ethereum/db/906a34e69aec8c0d/overlayrecent/
23G     data/chains/ethereum/db/906a34e69aec8c0d/overlayrecent/
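For completeness, a per-subdirectory breakdown of the same db path is a quick way to confirm that overlayrecent is the component growing (just a sketch; the hashed directory name is the one from the du output above):

$ du -hs data/chains/ethereum/db/906a34e69aec8c0d/*/   # summarise each subdirectory of the chain db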

I found suggestions that this state trie history can be restricted using the --pruning-history flag, where the default is to keep 64 states. I tried setting it to 16 and then 3.

parity -d data --min-peers 8 --max-peers 12 --no-color  --pruning fast --pruning-history 3

However, this leads to the same behavior.

Is this expected? And is there any practical guidance or documentation on the use of this flag? Is it unreasonable to expect to run parity in under 30GB like this?
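For reference, here is the same setup expressed as a config file passed via --config, which makes the runs easier to compare; the section and key names below are my assumption based on the CLI flag names, so please check them against parity --help before relying on them:

$ cat > parity.toml <<'EOF'
# section/key names assumed from the CLI flag names; verify against parity --help
[parity]
base_path = "data"

[network]
min_peers = 8
max_peers = 12

[footprint]
pruning = "fast"
pruning_history = 3
EOF
$ parity --config parity.toml --no-color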

@5chdn
Contributor

5chdn commented Sep 25, 2017

Can you have a look at #6280? It looks like you are affected. I wasn't aware this also happens with fast pruning, though.

@5chdn 5chdn added F2-bug 🐞 The client fails to follow expected behavior. M4-core ⛓ Core client code / Rust. Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known. labels Sep 25, 2017
@rphmeier
Contributor

rphmeier commented Sep 25, 2017

The minimum pruning history allowed is 8. On the mainnet, forks of size 3 or more are not that uncommon and probably happen at least a few times a day. I think the drive running out of space is mostly a consequence of the state trie being pretty large at this point.
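A minimal sketch of an invocation respecting that lower bound, reusing the flags from the original report (whether values below 8 are clamped or rejected is not confirmed here):

parity -d data --min-peers 8 --max-peers 12 --no-color --pruning fast --pruning-history 8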

@julian1
Author

julian1 commented Sep 26, 2017

Thanks for the information. I am trying to replicate this on an AWS node with a large drive, in order to check where disk usage stabilizes. But it is failing in an unrelated way during the snapshot restoration stage.

2017-09-26 00:38:23 UTC Syncing snapshot 105/361        #0   11/25 peers     3 MiB db    7 KiB chain  0 bytes queue   10 KiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-09-26 00:38:31 UTC Encountered error during state restoration: Snapshot restoration aborted.
2017-09-26 00:38:32 UTC Syncing       #0 d4e5…8fa3     0 blk/s    0 tx/s   0 Mgas/s      0+    0 Qed        #0    9/25 peers     3 MiB db    7 KiB chain  0 bytes queue   10 KiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-09-26 00:38:35 UTC Failed to initialize snapshot restoration: IO error: lock data/chains/ethereum/db/906a34e69aec8c0d/snapshot/restoration/db/LOCK: No locks available
2017-09-26 00:38:36 UTC Encountered error during state restoration: IO error: data/chains/ethereum/db/906a34e69aec8c0d/snapshot/restoration/db/000156.log: No such file or directory
2017-09-26 00:38:38 UTC Syncing snapshot 0/361        #0    8/25 peers     3 MiB db    7 KiB chain  0 bytes queue   18 KiB sync  RPC:  0 conn,  0 req/s,   0 µs

@julian1
Author

julian1 commented Sep 27, 2017

I set up two identical DigitalOcean instances, with --pruning fast and --pruning-history 8. They appear to have converged at 25GB after 24 hours. So, if I had started with just a bit more than the initial 23GB spare, it would have been OK.

This is more disk usage than I expected, based on information I could find in blogs or Stack Overflow posts about running parity. But I am not sure it is actually unreasonable, unless there is some other reason to think it is out of line with expected behavior.

So, I am closing on this basis.
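In case anyone wants to reproduce the measurement, a simple hourly logging loop over the overlayrecent directory is enough to see where usage flattens out (a rough sketch; the du.log filename is just an example, and the hashed db directory is the one from the du output above):

$ while true; do date -u; du -s data/chains/ethereum/db/906a34e69aec8c0d/overlayrecent/; sleep 3600; done >> du.log   # append a timestamped size sample every hour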

@julian1 julian1 closed this as completed Sep 27, 2017