nomad copy / to alloc directory #1478
Can you paste the output of:
I'm using vagrant (ubuntu/trusty64); here is the nomad output.
The job is scheduled to the machine docker1; here is the alloc info.
If you look here you will see the set of directories we add to the chroot: https://www.nomadproject.io/docs/drivers/exec.html. We also map …

It looks normal to me! Let me know if you'd like to reopen!
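For readers who want to shrink that default directory set: the copied paths can be overridden per client with the `chroot_env` stanza in the Nomad client configuration. A minimal sketch, with illustrative (not recommended) paths; verify the option against the docs for your Nomad version:

```hcl
client {
  enabled = true

  # Map host paths (keys) to paths inside the chroot (values).
  # When chroot_env is set, only these paths are copied for
  # exec-driver tasks instead of the default directory list.
  chroot_env {
    "/bin"             = "/bin"
    "/lib"             = "/lib"
    "/lib64"           = "/lib64"
    "/usr/bin"         = "/usr/bin"
    "/etc/ld.so.cache" = "/etc/ld.so.cache"
    "/etc/passwd"      = "/etc/passwd"
  }
}
```

Trimming the list reduces the per-allocation copy at the cost of a sparser environment for the task's binaries.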
We're running into this issue too, I think. It looks like all of /usr, /etc and all the other directories in the chroot list you linked to are copied into each allocation.

I'm completely new to Nomad, so I'm probably misunderstanding something here, because I don't really understand how this can scale. On our machines /usr is 1 GB and the disk is 30 GB. After 30 allocations the disk will be full. If you update a service 30 times the disk will be full, or if you run batch jobs a machine is basically unusable after 30 jobs. Is this really the way it's supposed to work?
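The back-of-the-envelope math above can be checked directly on a client. A small sketch; `/var/nomad/alloc` is an assumed `data_dir` location and the sizes are the figures from the comment, so substitute your own:

```shell
#!/bin/sh
# Inspect how much disk each allocation's chroot copy is using.
# /var/nomad/alloc is an assumed data_dir; substitute your own.
du -sh /var/nomad/alloc/* 2>/dev/null

# Rough estimate of how many allocations fit before the disk fills,
# using the figures quoted above (1 GB chroot copy, 30 GB disk).
CHROOT_MB=1024
DISK_MB=30720
echo $(( DISK_MB / CHROOT_MB ))
```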
Hey Theo,
I would suggest increasing the disk in the meantime. An upcoming release will add client-side garbage collection so that chroots are removed as disk pressure builds. Right now it happens on a static 4-hour interval.
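For readers on later versions: once client-side garbage collection shipped, it became tunable in the client stanza. The option names below come from later Nomad releases and are worth verifying against the client configuration docs for your version:

```hcl
client {
  # Client GC tuning options added in later Nomad releases.
  gc_interval              = "1m"  # how often the GC runs
  gc_disk_usage_threshold  = 80    # percent of disk used before GC triggers
  gc_inode_usage_threshold = 70    # percent of inodes used before GC triggers
}
```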
We may play around with other approaches to building a chroot, such as overlay filesystems, to reduce the overhead, but a flat filesystem brings performance benefits. Another alternative we plan to look into as time allows is supporting FS drivers and using ZFS. That brings the best of both worlds!
Thanks,
Alex
On Nov 29, 2016, 6:11 AM -0800, Theo Hultberg wrote:
We're running into this issue too, I think. It looks like all of /usr, /etc and all the other directories in the chroot list you linked to are copied into each allocation.
I'm completely new to Nomad, so I'm probably misunderstanding something here, because I don't really understand how this can scale. On our machines /usr is 1 GB and the disk is 30 GB. After 30 allocations the disk will be full. If you update a service 30 times the disk will be full, or if you run batch jobs a machine is basically unusable after 30 jobs. Is this really the way it's supposed to work?
@dadgar increasing the disk is not an option unfortunately; we're on EC2 but we don't use EBS (expensive, unnecessary, slow, risky, etc.), so we'll have to figure something out or wait for the fix. I'll see if I can get some time to write up a PR for the documentation, because I was very surprised by this behaviour. Now that I know about it I can kind of see how the docs don't say that it works otherwise, but they also require you to extrapolate from a sentence or two that you will need very big disks to use the exec driver.
@iconara Appreciate it. The fix is being worked on as we speak, so you won't have to wait too long!
When I run Nomad with this job, it copies all files to the alloc folder.
My Nomad version is 0.4.0.
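The job file itself is not shown in the thread. For context, a minimal exec-driver job that would trigger the per-allocation chroot copy might look like the following; all names and values here are illustrative:

```hcl
job "example" {
  datacenters = ["dc1"]

  group "app" {
    task "sleep" {
      # The exec driver builds a chroot for each allocation,
      # which is what copies the host directories discussed above.
      driver = "exec"

      config {
        command = "/bin/sleep"
        args    = ["300"]
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }
  }
}
```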