"File too big" on s390x #2632
As you noted, it worked anyway. In 1.23 we changed it from rejecting "too large" files to simply reporting them but trying anyway. It worries me that we're reaching above 220 megabytes for the compiler driver file, but I suppose it's not a specific problem. I'd suggest we change the message so it's a warning that there's a very large file which may cause problems, rather than showing it as an error. Would you agree with that? I suppose we could also just bump the max file size check a bit higher, but that feels like we'd simply be accepting that we continue to bloat the compiler on some targets.
The problem with an "error" is that I wasn't sure what had happened. It seemed to work, but I had no idea what the consequences really were. I was trying to debug a regression on that target, so this raised extra doubt about whether I had a working toolchain at all.
FWIW, a big part of that file is static LLVM, which may be dynamically linked on other targets. To be honest, while it does seem like a useful sanity check, I'm not sure that it belongs in rustup at all. The large file has already been built and distributed, out of the control of the rustup user. Instead, maybe this should be something we check and/or assert as a limit in rust CI instead. (cc @Mark-Simulacrum, @pietroalbini)
Even as a "warning", what would the user do about this? What sort of problems are you supposing?
Unpacking is (currently) done by allocating enough RAM to unpack the file from the compressed tarball into RAM, and then writing it to disk in a thread. This permits interleaving of IOs, which massively improves unpacking speed on many platforms. Unfortunately it means that, without the large-file check, severely RAM-limited targets can end up OOMed, which Rust's default allocator does not handle terribly gracefully (from a user perspective at least). Previously we simply refused to install in these circumstances; these days we permit the install to continue, but at least if the install crashes due to OOM we can see what caused it. We intend to switch such large elements to being streamed out (slower but low risk), but that work has yet to be done, as there's essentially one part-time rustup maintainer currently and there are other somewhat higher-priority problems to deal with :( I'd be very pleased if we were tracking installation artifact sizes usefully in rust CI, because if we intend to increase support for targets such as the Raspberry Pi, we'll need to think more seriously about the RAM footprint of the tooling.
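For illustration, here's a minimal Rust sketch of that fast path (not rustup's actual code; `unpack_entry` and the file names are made up). It shows why peak memory tracks the size of the largest file in the tarball:

```rust
use std::fs::File;
use std::io::{self, Read, Write};
use std::thread;

// Sketch of the buffer-then-write-in-a-thread fast path: the whole
// decompressed file is held in RAM, then a worker thread does the disk
// write so the unpacker can move on to the next entry. Peak memory is
// therefore at least the size of the largest file being unpacked.
fn unpack_entry(mut entry: impl Read, dest: &str) -> io::Result<thread::JoinHandle<io::Result<()>>> {
    let mut buf = Vec::new();
    entry.read_to_end(&mut buf)?; // entire file in memory: the OOM risk

    let dest = dest.to_owned();
    Ok(thread::spawn(move || {
        File::create(dest)?.write_all(&buf) // IO happens off-thread
    }))
}

fn main() -> io::Result<()> {
    // Stand-in for one tar entry: any `Read` source works.
    let entry = io::Cursor::new(vec![0u8; 16 * 1024]);
    unpack_entry(entry, "demo-output.bin")?.join().unwrap()?;
    Ok(())
}
```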
Concrete problems:
Users can fix some of these themselves, of course. Longer term we'll introduce a parallel writing chunked IO driver (io_uring will help, but isn't sufficient because it only helps Linux - we need to do async CloseHandle calls on Windows; 'it's complicated'), at which point rust can write as much data as it wants to and rustup won't care.
A safer slow path sounds like the right call, so the fast path doesn't have to force limitations.
We have such a safe path; see RUSTUP_IO_THREADS and RUSTUP_UNPACK_RAM in the config docs: https://rust-lang.github.io/rustup/environment-variables.html
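(For instance - values illustrative, not a recommendation; both knobs are described at that link - something like `RUSTUP_UNPACK_RAM=104857600 RUSTUP_IO_THREADS=1 rustup update` should cap the unpack buffer at roughly 100 MiB and serialize the disk IO, trading speed for a smaller peak footprint.)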
I think setting a limit during Rust's CI might be complicated, since we no longer package gzip'd tarballs in CI, recompressing at a later point - I think modern rustup is probably always using the xz tarballs, though, so for that purpose it may not matter. If xz file size limits alone make things better, I think that would be relatively easy to add. Would adding such a check allow us to ensure end users don't see this error? It sounds like it might be good either way to add some text to the error indicating what to do (e.g., filing an issue so we know).
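As a sketch of what such a CI-side size guard could look like (entirely hypothetical - the budget and the idea of walking an unpacked dist directory are illustrative, not an existing rust-lang/rust check):

```rust
use std::io;
use std::path::{Path, PathBuf};

// Hypothetical budget, chosen only because ~220 MB came up above.
const BUDGET_BYTES: u64 = 220 * 1024 * 1024;

// Recursively collect files larger than the budget.
fn oversized(dir: &Path, hits: &mut Vec<(PathBuf, u64)>) -> io::Result<()> {
    for entry in std::fs::read_dir(dir)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        if meta.is_dir() {
            oversized(&entry.path(), hits)?;
        } else if meta.len() > BUDGET_BYTES {
            hits.push((entry.path(), meta.len()));
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut hits = Vec::new();
    oversized(Path::new("."), &mut hits)?; // point at the unpacked dist tree
    for (path, len) in &hits {
        eprintln!("over budget: {} ({} bytes)", path.display(), len);
    }
    if !hits.is_empty() {
        std::process::exit(1); // fail the CI job
    }
    Ok(())
}
```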
@rbtcollins those are just knobs controlling the current behavior, right? I see a slow path as more of an automatic fallback, so you can zip along in the happy fast path for most of the process, but switch to slower streaming when a large file is encountered. |
Ah, no - the streaming implementation we have designed won't be slower, it will just be more memory efficient. I've made some headway on implementing it, but I get a few hours a month to hack on rustup, so it's going to be a while before it comes together.
Fixes rust-lang#2632, rust-lang#2145, rust-lang#2564. Files over 16M are now written incrementally in chunks rather than buffered in memory in one full linear buffer. The chunk size is not configurable. For threaded unpacking, the entire memory buffer will be used to buffer chunks, and a single worker thread will dispatch IO operations from the buffer, so minimal performance impact should be anticipated (file size/16M round trips at worst, and most network file systems will latency-hide linear writes). For immediate unpacking, each chunk is dispatched directly to disk, which may impact performance as less latency hiding is possible - but for immediate unpacking, clarity of behaviour is the priority.
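As a rough Rust sketch of the chunked approach described here (simplified - the real change also integrates with rustup's threaded IO dispatch, and `stream_entry` is a made-up name):

```rust
use std::fs::File;
use std::io::{self, Read, Write};

// 16 MiB, matching the fixed chunk size described above.
const CHUNK: usize = 16 * 1024 * 1024;

// Stream a large entry to disk chunk by chunk instead of buffering the
// whole file: peak memory is one chunk, at the cost of roughly
// file_size / CHUNK write round trips.
fn stream_entry(mut entry: impl Read, dest: &str) -> io::Result<()> {
    let mut out = File::create(dest)?;
    let mut buf = vec![0u8; CHUNK];
    loop {
        let n = entry.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        out.write_all(&buf[..n])?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Pretend "large file": just over one chunk, so two writes happen.
    let entry = io::Cursor::new(vec![1u8; CHUNK + 1]);
    stream_entry(entry, "large-demo.bin")
}
```

Peak memory stays at one chunk regardless of file size, which matches the "file size/16M round trips at worst" trade-off above.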
Problem

On s390x-unknown-linux-gnu, I saw an error while installing. It seems to work fine anyway.
Steps
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Possible Solution(s)

Increase MAX_FILE_SIZE? It seems #2363 was the most recent time...

Notes
Output of rustup --version:

Output of rustup show:

Every one of those nightlies gave a similar error on librustc_driver.