
pipelined extraction #236


Open · cosmicexplorer wants to merge 33 commits into master from pipelined-extract-v2

Conversation

@cosmicexplorer (Contributor) commented Aug 17, 2024:

Recreation of #208 to work around github issues.

Problem

ZipArchive::extract() corresponds to the way most zip implementations perform the task, but it's single-threaded. That's appropriate under the constraints imposed by Rust's Read and Seek traits, which require mutable access, so only one reader can extract file contents at a time. However, most Unix-like operating systems offer a pread() operation that reads at an explicit offset without mutating OS state like the shared file offset, so multiple threads can read from the same file handle at once. The Go programming language codifies this ability in the stdlib's io.ReaderAt interface.
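In Rust, the same capability is exposed on Unix by std::os::unix::fs::FileExt, whose read_at()/read_exact_at() methods take &self and are backed by pread(). A minimal sketch (not code from this PR):

    use std::{fs::File, io, os::unix::fs::FileExt};

    // read_exact_at() is backed by pread(): it takes &File and never moves
    // the shared file offset, so any number of threads can call it at once.
    fn read_chunk(file: &File, offset: u64, len: usize) -> io::Result<Vec<u8>> {
        let mut buf = vec![0u8; len];
        file.read_exact_at(&mut buf, offset)?;
        Ok(buf)
    }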

Solution

This is a rework of #72 that avoids introducing unnecessary thread pools and instead creates all output file handles and containing directories up front. For large zips, we want to:

  • create output handles and containing directories up front,
  • split the input file handle into chunks so the constituent file entries can be processed in parallel,
  • for large compressed entries, pipe their content into a dedicated stream, so that i/o and decompression don't intermix and quick small entries later in the file aren't blocked behind them.

src/read/split.rs was created to cover pread() and related operations, while src/read/pipelining.rs performs the high-level logic of splitting up entries and running the pipelined extraction.
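A rough conceptual sketch of that flow, with hypothetical names (the real pipeline is more involved, streaming large entries through dedicated pipes):

    use std::{fs::File, io, os::unix::fs::FileExt, sync::Arc, thread};

    // Hypothetical per-entry work item: where the compressed bytes live in
    // the archive, plus an output handle that was opened up front.
    struct EntrySpec {
        offset: u64,
        compressed_len: usize,
        output: File,
    }

    fn extract_in_parallel(archive: Arc<File>, entries: Vec<EntrySpec>) -> io::Result<()> {
        let workers: Vec<_> = entries
            .into_iter()
            .map(|entry| {
                let archive = Arc::clone(&archive);
                thread::spawn(move || -> io::Result<()> {
                    let mut buf = vec![0u8; entry.compressed_len];
                    // pread(): no shared file offset, so readers don't contend.
                    archive.read_exact_at(&mut buf, entry.offset)?;
                    // Placeholder: the real code decompresses `buf` (or pipes
                    // it to a decompression thread) before writing.
                    entry.output.write_all_at(&buf, 0)
                })
            })
            .collect();
        for worker in workers {
            worker.join().unwrap()?;
        }
        Ok(())
    }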

Result

  • The parallelism feature was added to the crate to gate the newly added code and API.
  • A dependency on the libc crate was added under #[cfg(all(unix, feature = "parallelism"))] in order to make use of OS-specific functionality.
  • zip::read::split_extract() was added as a new external API to extract from a &ZipArchive<fs::File> when #[cfg(all(unix, feature = "parallelism"))] is enabled.

Note that this does not handle symlinks yet; I plan to add that in a follow-up PR.
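For orientation, a call might look roughly like this. The split_extract() signature, the output-directory argument, and the Default impl on ExtractionParameters are assumptions for illustration (only decompression_threads is confirmed by the review below):

    #[cfg(all(unix, feature = "parallelism"))]
    fn extract_pipelined(
        zip_path: &std::path::Path,
        out_dir: &std::path::Path,
    ) -> zip::result::ZipResult<()> {
        use zip::read::{split_extract, ExtractionParameters};

        let archive = zip::ZipArchive::new(std::fs::File::open(zip_path)?)?;
        let params = ExtractionParameters {
            // Use all available cores for decompression; clamp to at least 1.
            decompression_threads: std::thread::available_parallelism()
                .map(|n| n.get())
                .unwrap_or(1),
            ..Default::default()
        };
        split_extract(&archive, out_dir, params)
    }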

CURRENT BENCHMARK STATUS

On a Linux host (with splice() and optionally copy_file_range()), we get about a 6.5x speedup with 12 decompression threads:

> cargo bench --features parallelism -- extract
running 2 tests
test extract_basic           ... bench: 104,389,978 ns/iter (+/- 5,715,453) = 85 MB/s
test extract_split           ... bench:  16,274,974 ns/iter (+/- 1,530,257) = 546 MB/s

Performance should keep increasing with thread count, up to the number of available CPU cores (this run used a parallelism of 12 on my 16-core laptop). This also works on macOS, the BSDs, and other #[cfg(unix)] platforms.

@cosmicexplorer cosmicexplorer force-pushed the pipelined-extract-v2 branch 4 times, most recently from 7a45b32 to 5cec332 Compare August 21, 2024 04:21
@cosmicexplorer (Contributor, Author) commented:

Going to try to get this one in before figuring out the CLI PR.

- initial sketch of lexicographic trie for pipelining
- move path splitting into a submodule
- lex trie can now propagate entry data
- outline handle allocation
- mostly handle files
- mostly handle dirs
- clarify symlink FIXMEs
- do symlink validation
- extract writable dir setting to helper method
- modify args to handle allocation method
- handle allocation test passes
- simplify perms a lot
- outline evaluation
- handle symlinks
- BIGGER CHANGE! add EntryReader/etc
- make initial pipelined extract work
- fix file perms by writing them after finishing the file write
- support directory entries by unix mode as well
- impl split extraction
- remove dependency on reader refactoring
- add dead_code to methods we don't use yet
@cosmicexplorer cosmicexplorer force-pushed the pipelined-extract-v2 branch 4 times, most recently from b90c9e2 to fa18aa3 Compare January 16, 2025 21:23
@cosmicexplorer cosmicexplorer marked this pull request as ready for review January 17, 2025 00:48
@Pr0methean (Member) left a comment:

Here's a review of what I've read so far. Still needs a fair bit of work, but I'm happy with the overall concept.

    #[derive(PartialEq, Eq, Debug, Clone)]
    pub(crate) struct DirEntry<'a, Data> {
        pub properties: Option<Data>,
        pub children: BTreeMap<&'a str, Box<FSEntry<'a, Data>>>,
@Pr0methean (Member) commented:

If we were to Box the BTreeMap instead of its individual entries, what difference would that make to pointer chasing and heap fragmentation?
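(For reference, the boxed-map alternative being asked about would look something like pub children: Box<BTreeMap<&'a str, FSEntry<'a, Data>>>, i.e. the map itself behind a single Box, with entries stored inline in the B-tree nodes rather than each behind its own Box.)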

@cosmicexplorer (Contributor, Author) replied:

So this data structure only ever exists temporarily: it is freed by transform_entries_to_allocated_handles(), which runs to completion before we engage in any parallelism. The trie exists purely to deduplicate the subdirectories to create and to ensure the uniqueness of output file handles.

...however, after trying out this change, I think I prefer this approach anyway. I'm under the impression that pointer chasing and heap fragmentation are less of an issue given that this is a temporary data structure, but this layout seems to make more sense to me regardless.

Let me know if I'm correct in thinking that the temporary nature of this data structure means we can avoid analyzing heap fragmentation too deeply!

@Pr0methean (Member) replied Mar 17, 2025:

Yes, that's fine, as long as we're not interleaving expansions of it with expansions of anything longer-lived. Fragmentation becomes a problem when the fragmented structure is dropped but other objects allocated in between its elements are not.

    pub file_range_copy_buffer_length: usize,
    /// Size of buffer used to splice contents from a pipe into an output file handle.
    ///
    /// Used on non-Linux platforms without [`splice()`](https://en.wikipedia.org/wiki/Splice_(system_call)).
@Pr0methean (Member) commented:

This buffer isn't necessary on any Unix; see https://stackoverflow.com/a/10330172.

@cosmicexplorer (Contributor, Author) replied:

I'm not sure I understand your meaning here. That answer seems to say that on non-Linux platforms, read()/write() with an explicit buffer (as we do here) is the way to go. Our PipeReadBufferSplicer struct performs read() followed by pwrite_all() with an explicit buffer, because we can't use splice().

Do I misunderstand you here?

@Pr0methean (Member) replied Mar 17, 2025:

What it's saying is that on Unix, when you don't have splice(), you should mmap() the file directly and pass the mapped region to write(). The memmap2 crate will provide the wrapper we need.
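A minimal sketch of that suggestion, assuming the memmap2 crate and a hypothetical helper name:

    use std::{fs::File, io::{self, Write}, ops::Range};

    use memmap2::Mmap;

    // Copy a byte range of `input` into `output` with no intermediate copy
    // buffer: write() consumes the mapped pages directly.
    fn copy_range_via_mmap(input: &File, range: Range<usize>, output: &mut File) -> io::Result<()> {
        // Safety: the file must not be truncated or modified by another
        // process while the mapping is alive; see the memmap2 docs.
        let mapped = unsafe { Mmap::map(input)? };
        output.write_all(&mapped[range])
    }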

@cosmicexplorer (Contributor, Author) commented Feb 5, 2025:

thank you so much for these wonderful comments!!

- initialize the test archives exactly once in statics
- add benchmarks for dynamic and static test data
- use lazy_static
@cosmicexplorer (Contributor, Author) commented:

Was able to remove the non_local_definitions lint ignore after updating displaydoc to 0.2.5!

this may technically reduce heap fragmentation, but since this data structure only exists
temporarily, that's probably not too important. instead, this change just reduces the amount of
coercion and unboxing we need to do
@cosmicexplorer (Contributor, Author) commented:

Hey @Pr0methean -- think I got to all of your comments! I proposed a couple of compromises to defer to follow-up PRs (supporting absolute extraction paths and symlinks); let me know if you agree! I'm hoping to spend more time on this in the next few weeks to get it in and then do the follow-ups. No rush as usual, and I really appreciate your comments.


    let params = ExtractionParameters {
    -   decompression_threads: DECOMPRESSION_THREADS,
    +   decompression_threads: num_cpus::get() / 3,
@Pr0methean (Member) commented:

What will the other 2/3 of the CPUs be doing? Also, does this need to be clamped to at least 1?
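For illustration, a clamp guaranteeing at least one thread would be decompression_threads: (num_cpus::get() / 3).max(1), since the integer division yields 0 on machines with fewer than 3 CPUs.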

@Pr0methean (Member) left a comment:

Looks good; just 2 minor comments.

    let block = Self::from_le(block);
    /// Convert endianness and check the magic value.
    #[allow(clippy::wrong_self_convention)]
    fn validate(self) -> ZipResult<Self> {
@Pr0methean (Member) commented:

Call this function from_le_validated to make its combined functionality clearer, or else separate out the from_le call and name this one with_checked_magic.
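A sketch of the first option, assuming from_le keeps its current shape:

    /// Convert endianness and check the magic value in one step.
    fn from_le_validated(block: Self) -> ZipResult<Self> {
        Self::from_le(block).validate()
    }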
