nix-copy-closure invoked by nixops runs out of memory (OOM) on low mem systems #38808
Comments
NixOS/nix#1681 related?
This is probably caused by the upgrade to Nix 2.0 and its increased memory consumption on path imports in general (not just those performed by nix copy).
Where can I find the discussion on how this huge overhead is warranted? Better yet, please point me towards a nix-daemon config option that limits the threads or memory during massive parallel imports.
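As far as I know there is no nix-daemon option for this; the closest stopgap is to have systemd cap the daemon's memory. A hedged NixOS sketch (the 512M figure is an arbitrary example, and capping memory only scopes the OOM rather than preventing it; on cgroups v1 the directive is `MemoryLimit` instead):

```nix
# Hypothetical workaround, not a Nix feature: let systemd cap the
# nix-daemon's memory. The 512M value is only an example.
{
  systemd.services.nix-daemon.serviceConfig = {
    MemoryMax = "512M";  # hard cap; imports exceeding it are killed, not throttled
  };
}
```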
There is no good reason for this AFAIK. Comments on NixOS/nix#1681 reference a couple of commits that should fix this; they're just not in 2.0.
I'm using this to boot my VMs initially into 17.09 instead, which has the old Nix that doesn't have memory problems: NixOS/nix#1988 (comment). That way I can use nixops again against AWS, at least for the initial deployment (my deployment then itself puts Nix 2.0 on the machine, which still has memory issues, but at least that one you can patch easily). Right now I'm still having problems with …
Fixes `error: out of memory` of `nix-store --serve --write` when receiving packages via SSH (and perhaps other sources). See NixOS#1681 NixOS#1969 NixOS#1988 NixOS/nixpkgs#38808.

Performance improvement on `nix-store --import` of a 2.2 GB cudatoolkit closure:

When the store path already exists:
Before: 10.82user 2.66system 0:20.14elapsed 66%CPU (0avgtext+0avgdata 12556maxresident)k
After: 11.43user 2.94system 0:16.71elapsed 86%CPU (0avgtext+0avgdata 4204664maxresident)k

When the store path doesn't yet exist (after `nix-store --delete`):
Before: 11.15user 2.09system 0:13.26elapsed 99%CPU (0avgtext+0avgdata 4204732maxresident)k
After: 5.27user 1.48system 0:06.80elapsed 99%CPU (0avgtext+0avgdata 12032maxresident)k

The reduction is 4200 MB -> 12 MB RAM usage, and it also takes less time.
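As a sanity check on the figures quoted above: the `maxresident` numbers reported by GNU time are in kB, so converting them to MB reproduces the "4200 MB -> 12 MB" summary line:

```shell
# maxresident is in kB; integer-divide by 1024 to get MB.
echo "$((4204664 / 1024)) MB"   # prints "4106 MB", i.e. ~4.2 GB before the fix
echo "$((12032 / 1024)) MB"     # prints "11 MB", i.e. ~12 MB after the fix
```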
Here's a Nix PR that works for my use case: NixOS/nix#2206. With convenience instructions to try the patch out: NixOS/nix#2206 (comment)
I have backported @edolstra's memory fixes to Nix: NixOS/nix@2.0.4...nh2:nh2-2.0.4-issue-1681-cherry-pick. Note this fixes the case where the machine that's running …
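For reference, a backport like this can be applied declaratively on NixOS by overriding the system's Nix package; a minimal sketch, assuming the cherry-picked commits have been exported to a local patch file (the file name here is a placeholder, not part of the backport branch):

```nix
# Sketch: apply a backported memory-fix patch to the nix package.
# ./issue-1681-backport.patch is hypothetical; substitute the actual patch.
{ pkgs, ... }:
{
  nix.package = pkgs.nix.overrideAttrs (old: {
    patches = (old.patches or []) ++ [ ./issue-1681-backport.patch ];
  });
}
```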
@elitak Do you still observe issues with Nix 2.2, or with the above backport to …? Related: asking whether people still observe problems in NixOS/nix#1681 (comment)
Thank you for your contributions. This has been automatically marked as stale because it has had no activity for 180 days. If this is still important to you, we ask that you leave a comment below. Your comment can be as simple as "still important to me". This lets people see that at least one person still cares about this. Someone will have to do this at most twice a year if there is no other activity. Here are suggestions that might help resolve this more quickly:
This is no longer the case with 20.03.
Issue description
When I run `nixops deploy`, some of my smaller systems (1 GB RAM VPSes) fail because the `nix-copy-closure` step runs out of memory. I've monitored the process and it indeed consumes over a GB while working. Adding some swap lets me work around the issue.

I never had this problem using the master branch a month or two ago. Why does `nix-copy-closure` suddenly consume so much memory, and why does it do so without backing off or offering a setting to limit its allocations? Is this a simple unintended regression? I can't understand why it would even need this much; it should be buffering compressed streams to disk, then unpacking them, which takes almost no memory.

Steps to reproduce
Run `nix-copy-closure --use-substitutes` onto a low-memory (1 GB RAM) system.

Technical details
- system: "x86_64-linux"
- host os: Linux 4.14.28, NixOS, 18.09.git.50dad060420 (Jellyfish)
- multi-user?: yes
- sandbox: no
- version: nix-env (Nix) 2.0
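For NixOS targets, the swap workaround mentioned in the issue description can also be expressed declaratively instead of creating a swap file by hand; a sketch with example values (path and size are arbitrary, size is in MiB):

```nix
# Sketch of the swap workaround: a 2 GiB swap file that NixOS creates
# and enables at activation. /swapfile and 2048 are example values.
{
  swapDevices = [
    { device = "/swapfile"; size = 2048; }
  ];
}
```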