Increase chunk size for speeding up file downloads #5501

Open
Narsil wants to merge 2 commits into main

Conversation

@Narsil (Contributor) commented Feb 3, 2023

Original fix: huggingface/huggingface_hub#1267
Not sure this function is actually still called though.

I haven't run benchmarks on this. Is there a dataset whose files are hosted on the Hub through CloudFront, so we can have the same setup as in hf_hub?
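For context, the function being touched follows the usual `requests` streaming pattern. A minimal, simplified sketch of that kind of loop (illustrative names and progress handling only, not the exact `datasets` implementation):

```python
import requests
from tqdm import tqdm


def http_get_sketch(url: str, path: str, chunk_size: int = 10 * 1024 * 1024) -> None:
    """Simplified sketch of a streamed download with a progress bar.

    Illustrative only: each chunk costs one Python-level iteration plus one
    file write and one progress-bar update, so larger chunks reduce
    per-byte overhead.
    """
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        total = int(response.headers.get("Content-Length", 0)) or None
        with open(path, "wb") as f, tqdm(total=total, unit="B", unit_scale=True) as progress:
            for chunk in response.iter_content(chunk_size=chunk_size):
                f.write(chunk)
                progress.update(len(chunk))
```

The per-iteration overhead is what the chunk-size bump targets: on fast connections, 1 KiB chunks keep the loop busy with bookkeeping instead of I/O.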

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

github-actions bot commented Feb 3, 2023


PyArrow==6.0.0


Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.008407 / 0.011353 (-0.002946) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.004651 / 0.011008 (-0.006357) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.100367 / 0.038508 (0.061859) |
| read_batch_unformated after write_array2d | 0.029107 / 0.023109 (0.005998) |
| read_batch_unformated after write_flattened_sequence | 0.302798 / 0.275898 (0.026900) |
| read_batch_unformated after write_nested_sequence | 0.354379 / 0.323480 (0.030899) |
| read_col_formatted_as_numpy after write_array2d | 0.006985 / 0.007986 (-0.001001) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.003365 / 0.004328 (-0.000963) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.078312 / 0.004250 (0.074062) |
| read_col_unformated after write_array2d | 0.034205 / 0.037052 (-0.002847) |
| read_col_unformated after write_flattened_sequence | 0.310431 / 0.258489 (0.051941) |
| read_col_unformated after write_nested_sequence | 0.346239 / 0.293841 (0.052398) |
| read_formatted_as_numpy after write_array2d | 0.033800 / 0.128546 (-0.094747) |
| read_formatted_as_numpy after write_flattened_sequence | 0.011515 / 0.075646 (-0.064131) |
| read_formatted_as_numpy after write_nested_sequence | 0.323588 / 0.419271 (-0.095684) |
| read_unformated after write_array2d | 0.040766 / 0.043533 (-0.002767) |
| read_unformated after write_flattened_sequence | 0.300914 / 0.255139 (0.045775) |
| read_unformated after write_nested_sequence | 0.332983 / 0.283200 (0.049784) |
| write_array2d | 0.087500 / 0.141683 (-0.054182) |
| write_flattened_sequence | 1.469505 / 1.452155 (0.017350) |
| write_nested_sequence | 1.505119 / 1.492716 (0.012403) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.187319 / 0.018006 (0.169313) |
| get_batch_of_1024_rows | 0.405498 / 0.000490 (0.405008) |
| get_first_row | 0.001000 / 0.000200 (0.000800) |
| get_last_row | 0.000069 / 0.000054 (0.000015) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.022583 / 0.037411 (-0.014828) |
| shard | 0.098096 / 0.014526 (0.083570) |
| shuffle | 0.104272 / 0.176557 (-0.072284) |
| sort | 0.142801 / 0.737135 (-0.594335) |
| train_test_split | 0.109749 / 0.296338 (-0.186590) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.423343 / 0.215209 (0.208134) |
| read 50000 | 4.215116 / 2.077655 (2.137461) |
| read_batch 50000 10 | 1.899714 / 1.504120 (0.395594) |
| read_batch 50000 100 | 1.689579 / 1.541195 (0.148384) |
| read_batch 50000 1000 | 1.710292 / 1.468490 (0.241801) |
| read_formatted numpy 5000 | 0.690976 / 4.584777 (-3.893801) |
| read_formatted pandas 5000 | 3.432501 / 3.745712 (-0.313212) |
| read_formatted tensorflow 5000 | 1.899600 / 5.269862 (-3.370261) |
| read_formatted torch 5000 | 1.279801 / 4.565676 (-3.285876) |
| read_formatted_batch numpy 5000 10 | 0.082763 / 0.424275 (-0.341512) |
| read_formatted_batch numpy 5000 1000 | 0.012545 / 0.007607 (0.004938) |
| shuffled read 5000 | 0.531381 / 0.226044 (0.305336) |
| shuffled read 50000 | 5.320077 / 2.268929 (3.051148) |
| shuffled read_batch 50000 10 | 2.370705 / 55.444624 (-53.073919) |
| shuffled read_batch 50000 100 | 2.007089 / 6.876477 (-4.869388) |
| shuffled read_batch 50000 1000 | 2.062412 / 2.142072 (-0.079661) |
| shuffled read_formatted numpy 5000 | 0.814998 / 4.805227 (-3.990229) |
| shuffled read_formatted_batch numpy 5000 10 | 0.149822 / 6.500664 (-6.350842) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.064399 / 0.075469 (-0.011070) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.226196 / 1.841788 (-0.615591) |
| map fast-tokenizer batched | 13.823443 / 8.074308 (5.749134) |
| map identity | 13.813667 / 10.191392 (3.622275) |
| map identity batched | 0.161289 / 0.680424 (-0.519135) |
| map no-op batched | 0.028569 / 0.534201 (-0.505632) |
| map no-op batched numpy | 0.390360 / 0.579283 (-0.188923) |
| map no-op batched pandas | 0.396217 / 0.434364 (-0.038147) |
| map no-op batched pytorch | 0.483120 / 0.540337 (-0.057217) |
| map no-op batched tensorflow | 0.570041 / 1.386936 (-0.816895) |
PyArrow==latest

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.006422 / 0.011353 (-0.004931) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.004528 / 0.011008 (-0.006481) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.076043 / 0.038508 (0.037535) |
| read_batch_unformated after write_array2d | 0.027631 / 0.023109 (0.004522) |
| read_batch_unformated after write_flattened_sequence | 0.340622 / 0.275898 (0.064724) |
| read_batch_unformated after write_nested_sequence | 0.376694 / 0.323480 (0.053214) |
| read_col_formatted_as_numpy after write_array2d | 0.004993 / 0.007986 (-0.002992) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.003403 / 0.004328 (-0.000926) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.074521 / 0.004250 (0.070270) |
| read_col_unformated after write_array2d | 0.037568 / 0.037052 (0.000516) |
| read_col_unformated after write_flattened_sequence | 0.343423 / 0.258489 (0.084934) |
| read_col_unformated after write_nested_sequence | 0.387729 / 0.293841 (0.093888) |
| read_formatted_as_numpy after write_array2d | 0.031790 / 0.128546 (-0.096757) |
| read_formatted_as_numpy after write_flattened_sequence | 0.011767 / 0.075646 (-0.063879) |
| read_formatted_as_numpy after write_nested_sequence | 0.085182 / 0.419271 (-0.334090) |
| read_unformated after write_array2d | 0.042867 / 0.043533 (-0.000666) |
| read_unformated after write_flattened_sequence | 0.341269 / 0.255139 (0.086130) |
| read_unformated after write_nested_sequence | 0.368460 / 0.283200 (0.085261) |
| write_array2d | 0.090153 / 0.141683 (-0.051530) |
| write_flattened_sequence | 1.536490 / 1.452155 (0.084335) |
| write_nested_sequence | 1.596403 / 1.492716 (0.103686) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.222373 / 0.018006 (0.204367) |
| get_batch_of_1024_rows | 0.396145 / 0.000490 (0.395655) |
| get_first_row | 0.000384 / 0.000200 (0.000184) |
| get_last_row | 0.000062 / 0.000054 (0.000008) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.024801 / 0.037411 (-0.012610) |
| shard | 0.099711 / 0.014526 (0.085185) |
| shuffle | 0.106094 / 0.176557 (-0.070463) |
| sort | 0.147819 / 0.737135 (-0.589316) |
| train_test_split | 0.110065 / 0.296338 (-0.186274) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.442863 / 0.215209 (0.227654) |
| read 50000 | 4.420043 / 2.077655 (2.342388) |
| read_batch 50000 10 | 2.070136 / 1.504120 (0.566016) |
| read_batch 50000 100 | 1.862363 / 1.541195 (0.321168) |
| read_batch 50000 1000 | 1.910890 / 1.468490 (0.442400) |
| read_formatted numpy 5000 | 0.702570 / 4.584777 (-3.882207) |
| read_formatted pandas 5000 | 3.435855 / 3.745712 (-0.309857) |
| read_formatted tensorflow 5000 | 1.871290 / 5.269862 (-3.398572) |
| read_formatted torch 5000 | 1.169321 / 4.565676 (-3.396355) |
| read_formatted_batch numpy 5000 10 | 0.083674 / 0.424275 (-0.340601) |
| read_formatted_batch numpy 5000 1000 | 0.012823 / 0.007607 (0.005216) |
| shuffled read 5000 | 0.539330 / 0.226044 (0.313285) |
| shuffled read 50000 | 5.403317 / 2.268929 (3.134389) |
| shuffled read_batch 50000 10 | 2.536508 / 55.444624 (-52.908117) |
| shuffled read_batch 50000 100 | 2.179629 / 6.876477 (-4.696847) |
| shuffled read_batch 50000 1000 | 2.207586 / 2.142072 (0.065514) |
| shuffled read_formatted numpy 5000 | 0.812256 / 4.805227 (-3.992972) |
| shuffled read_formatted_batch numpy 5000 10 | 0.152915 / 6.500664 (-6.347749) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.068431 / 0.075469 (-0.007038) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.294982 / 1.841788 (-0.546806) |
| map fast-tokenizer batched | 13.912811 / 8.074308 (5.838503) |
| map identity | 13.415658 / 10.191392 (3.224266) |
| map identity batched | 0.149531 / 0.680424 (-0.530893) |
| map no-op batched | 0.016785 / 0.534201 (-0.517416) |
| map no-op batched numpy | 0.381055 / 0.579283 (-0.198228) |
| map no-op batched pandas | 0.392084 / 0.434364 (-0.042280) |
| map no-op batched pytorch | 0.472614 / 0.540337 (-0.067724) |
| map no-op batched tensorflow | 0.559799 / 1.386936 (-0.827137) |

@Narsil requested a review from lhoestq on February 3, 2023 at 11:16
@lhoestq (Member) commented Feb 3, 2023

We simply do GET requests to hf.co to download files from the Hub right now. We may switch to hfh when we update how we do caching.

You can try it on any dataset hosted on the Hub, like imagenet-1k.
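Concretely, such a download is just a streamed GET against the Hub's `resolve` endpoint. A minimal sketch, with placeholder repo and file names (gated datasets such as imagenet-1k additionally require an auth token):

```python
import requests

# Standard Hub "resolve" URL pattern; the repo id and file path are placeholders.
repo_id = "<user>/<dataset>"
filename = "<path/inside/repo>"
url = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{filename}"

with requests.get(url, stream=True) as response:
    # For gated repos, pass headers={"Authorization": f"Bearer {token}"} to requests.get.
    response.raise_for_status()
    with open("downloaded_file", "wb") as f:
        for chunk in response.iter_content(chunk_size=10 * 1024 * 1024):
            f.write(chunk)
```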

@albertvillanova (Member) left a comment


Thanks for the improvement, @Narsil.

Just a question below.

```diff
@@ -377,7 +377,7 @@ def http_get(
         desc=desc or "Downloading",
         disable=not logging.is_progress_bar_enabled(),
     ) as progress:
-        for chunk in response.iter_content(chunk_size=1024):
+        for chunk in response.iter_content(chunk_size=1024 * 1024):
```
@albertvillanova (Member) commented on the changed line:

To be aligned with huggingface_hub, shouldn't it be 10 MB instead?

Suggested change:

```diff
-        for chunk in response.iter_content(chunk_size=1024 * 1024):
+        for chunk in response.iter_content(chunk_size=10 * 1024 * 1024):
```

@Narsil (Contributor, Author) replied:

We could. In my experiments it made only tiny improvements over 1 MB, but you're right, let's keep it consistent!

I would like to run benchmarks on this too before claiming victory; however, I'm short on time this week :(
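In the meantime, here is a rough way to compare chunk sizes (a sketch only; the URL is a placeholder for any large file hosted on the Hub, and real numbers will depend heavily on network and CDN):

```python
import time

import requests

# Placeholder URL: point it at any large file hosted on the Hub.
URL = "https://huggingface.co/datasets/<dataset>/resolve/main/<large-file>"

for chunk_size in (1024, 1024 * 1024, 10 * 1024 * 1024):
    start = time.time()
    downloaded = 0
    with requests.get(URL, stream=True) as response:
        response.raise_for_status()
        # Stream the whole file, counting bytes instead of writing them to disk.
        for chunk in response.iter_content(chunk_size=chunk_size):
            downloaded += len(chunk)
    elapsed = time.time() - start
    print(f"chunk_size={chunk_size:>10} B: {downloaded / elapsed / 1e6:.1f} MB/s")
```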

@Narsil (Contributor, Author) commented Feb 9, 2023

To run benchmarks, use a machine with a good network (> 50 MB/s); otherwise you won't see anything.

On a DGX or large AWS instances, I was able to get ~500 MB/s of transfer speed, where it was stuck at ~50 MB/s before.

For faster networks, it's impossible to go further in pure Python AFAIK (there are still ways to achieve it with hf_transfer, but that is very much unofficial, since it's rare to get such bandwidth).
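As a back-of-envelope illustration (my arithmetic, not a measurement from this PR) of how many Python-level iterations the streaming loop performs per gigabyte at each chunk size:

```python
# Python-level iterations needed to stream a 1 GiB file at each chunk size.
GB = 1024**3
for chunk_size in (1024, 1024 * 1024, 10 * 1024 * 1024):
    print(f"{chunk_size:>10} B chunks -> {GB // chunk_size:>9,} iterations")
# 1 KiB -> 1,048,576 iterations; 1 MiB -> 1,024; 10 MiB -> 102.
```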

@albertvillanova changed the title from "Speeding up file downloads" to "Increase chunk size for speeding up file downloads" on Feb 9, 2023
Co-authored-by: Albert Villanova del Moral <[email protected]>
github-actions bot commented Feb 9, 2023


PyArrow==6.0.0


Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.010931 / 0.011353 (-0.000422) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.005730 / 0.011008 (-0.005278) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.116653 / 0.038508 (0.078145) |
| read_batch_unformated after write_array2d | 0.041439 / 0.023109 (0.018330) |
| read_batch_unformated after write_flattened_sequence | 0.359559 / 0.275898 (0.083661) |
| read_batch_unformated after write_nested_sequence | 0.408398 / 0.323480 (0.084918) |
| read_col_formatted_as_numpy after write_array2d | 0.009193 / 0.007986 (0.001208) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.006024 / 0.004328 (0.001695) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.087743 / 0.004250 (0.083492) |
| read_col_unformated after write_array2d | 0.048636 / 0.037052 (0.011584) |
| read_col_unformated after write_flattened_sequence | 0.363133 / 0.258489 (0.104643) |
| read_col_unformated after write_nested_sequence | 0.407144 / 0.293841 (0.113303) |
| read_formatted_as_numpy after write_array2d | 0.044610 / 0.128546 (-0.083936) |
| read_formatted_as_numpy after write_flattened_sequence | 0.014075 / 0.075646 (-0.061571) |
| read_formatted_as_numpy after write_nested_sequence | 0.396506 / 0.419271 (-0.022766) |
| read_unformated after write_array2d | 0.057014 / 0.043533 (0.013482) |
| read_unformated after write_flattened_sequence | 0.358254 / 0.255139 (0.103115) |
| read_unformated after write_nested_sequence | 0.399887 / 0.283200 (0.116687) |
| write_array2d | 0.115337 / 0.141683 (-0.026346) |
| write_flattened_sequence | 1.731655 / 1.452155 (0.279500) |
| write_nested_sequence | 1.813276 / 1.492716 (0.320560) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.210197 / 0.018006 (0.192191) |
| get_batch_of_1024_rows | 0.475887 / 0.000490 (0.475397) |
| get_first_row | 0.003323 / 0.000200 (0.003123) |
| get_last_row | 0.000100 / 0.000054 (0.000045) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.031686 / 0.037411 (-0.005725) |
| shard | 0.131167 / 0.014526 (0.116641) |
| shuffle | 0.137919 / 0.176557 (-0.038637) |
| sort | 0.184843 / 0.737135 (-0.552293) |
| train_test_split | 0.144998 / 0.296338 (-0.151340) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.471371 / 0.215209 (0.256162) |
| read 50000 | 4.693739 / 2.077655 (2.616084) |
| read_batch 50000 10 | 2.251567 / 1.504120 (0.747447) |
| read_batch 50000 100 | 1.993653 / 1.541195 (0.452458) |
| read_batch 50000 1000 | 2.053236 / 1.468490 (0.584746) |
| read_formatted numpy 5000 | 0.809226 / 4.584777 (-3.775551) |
| read_formatted pandas 5000 | 4.494120 / 3.745712 (0.748408) |
| read_formatted tensorflow 5000 | 2.436921 / 5.269862 (-2.832940) |
| read_formatted torch 5000 | 1.541973 / 4.565676 (-3.023704) |
| read_formatted_batch numpy 5000 10 | 0.098401 / 0.424275 (-0.325874) |
| read_formatted_batch numpy 5000 1000 | 0.014329 / 0.007607 (0.006722) |
| shuffled read 5000 | 0.597813 / 0.226044 (0.371769) |
| shuffled read 50000 | 5.964035 / 2.268929 (3.695107) |
| shuffled read_batch 50000 10 | 2.709283 / 55.444624 (-52.735341) |
| shuffled read_batch 50000 100 | 2.323537 / 6.876477 (-4.552940) |
| shuffled read_batch 50000 1000 | 2.401707 / 2.142072 (0.259635) |
| shuffled read_formatted numpy 5000 | 0.976379 / 4.805227 (-3.828848) |
| shuffled read_formatted_batch numpy 5000 10 | 0.194638 / 6.500664 (-6.306026) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.076904 / 0.075469 (0.001435) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.516877 / 1.841788 (-0.324911) |
| map fast-tokenizer batched | 18.228010 / 8.074308 (10.153702) |
| map identity | 16.631750 / 10.191392 (6.440358) |
| map identity batched | 0.176030 / 0.680424 (-0.504394) |
| map no-op batched | 0.033769 / 0.534201 (-0.500432) |
| map no-op batched numpy | 0.520511 / 0.579283 (-0.058773) |
| map no-op batched pandas | 0.531764 / 0.434364 (0.097400) |
| map no-op batched pytorch | 0.648658 / 0.540337 (0.108321) |
| map no-op batched tensorflow | 0.779124 / 1.386936 (-0.607812) |
PyArrow==latest

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.008635 / 0.011353 (-0.002718) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.005785 / 0.011008 (-0.005223) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.087042 / 0.038508 (0.048534) |
| read_batch_unformated after write_array2d | 0.039632 / 0.023109 (0.016523) |
| read_batch_unformated after write_flattened_sequence | 0.419719 / 0.275898 (0.143821) |
| read_batch_unformated after write_nested_sequence | 0.463860 / 0.323480 (0.140380) |
| read_col_formatted_as_numpy after write_array2d | 0.006621 / 0.007986 (-0.001364) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.004655 / 0.004328 (0.000327) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.087003 / 0.004250 (0.082753) |
| read_col_unformated after write_array2d | 0.057122 / 0.037052 (0.020069) |
| read_col_unformated after write_flattened_sequence | 0.417820 / 0.258489 (0.159331) |
| read_col_unformated after write_nested_sequence | 0.485981 / 0.293841 (0.192140) |
| read_formatted_as_numpy after write_array2d | 0.042606 / 0.128546 (-0.085940) |
| read_formatted_as_numpy after write_flattened_sequence | 0.014369 / 0.075646 (-0.061278) |
| read_formatted_as_numpy after write_nested_sequence | 0.101939 / 0.419271 (-0.317333) |
| read_unformated after write_array2d | 0.058303 / 0.043533 (0.014770) |
| read_unformated after write_flattened_sequence | 0.415053 / 0.255139 (0.159914) |
| read_unformated after write_nested_sequence | 0.439914 / 0.283200 (0.156714) |
| write_array2d | 0.134628 / 0.141683 (-0.007055) |
| write_flattened_sequence | 1.765464 / 1.452155 (0.313309) |
| write_nested_sequence | 1.843963 / 1.492716 (0.351247) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.307156 / 0.018006 (0.289150) |
| get_batch_of_1024_rows | 0.476657 / 0.000490 (0.476167) |
| get_first_row | 0.019718 / 0.000200 (0.019518) |
| get_last_row | 0.000160 / 0.000054 (0.000105) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.035286 / 0.037411 (-0.002125) |
| shard | 0.138094 / 0.014526 (0.123568) |
| shuffle | 0.144768 / 0.176557 (-0.031789) |
| sort | 0.191386 / 0.737135 (-0.545750) |
| train_test_split | 0.151988 / 0.296338 (-0.144350) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.504733 / 0.215209 (0.289523) |
| read 50000 | 5.027048 / 2.077655 (2.949394) |
| read_batch 50000 10 | 2.441571 / 1.504120 (0.937451) |
| read_batch 50000 100 | 2.198242 / 1.541195 (0.657047) |
| read_batch 50000 1000 | 2.298473 / 1.468490 (0.829983) |
| read_formatted numpy 5000 | 0.848048 / 4.584777 (-3.736729) |
| read_formatted pandas 5000 | 4.613102 / 3.745712 (0.867390) |
| read_formatted tensorflow 5000 | 2.522824 / 5.269862 (-2.747037) |
| read_formatted torch 5000 | 1.610159 / 4.565676 (-2.955517) |
| read_formatted_batch numpy 5000 10 | 0.105197 / 0.424275 (-0.319078) |
| read_formatted_batch numpy 5000 1000 | 0.015195 / 0.007607 (0.007588) |
| shuffled read 5000 | 0.626976 / 0.226044 (0.400932) |
| shuffled read 50000 | 6.268459 / 2.268929 (3.999530) |
| shuffled read_batch 50000 10 | 3.014387 / 55.444624 (-52.430237) |
| shuffled read_batch 50000 100 | 2.554102 / 6.876477 (-4.322375) |
| shuffled read_batch 50000 1000 | 2.656051 / 2.142072 (0.513979) |
| shuffled read_formatted numpy 5000 | 1.027978 / 4.805227 (-3.777249) |
| shuffled read_formatted_batch numpy 5000 10 | 0.200686 / 6.500664 (-6.299978) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.077104 / 0.075469 (0.001635) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.485228 / 1.841788 (-0.356560) |
| map fast-tokenizer batched | 18.319949 / 8.074308 (10.245641) |
| map identity | 15.855739 / 10.191392 (5.664347) |
| map identity batched | 0.204365 / 0.680424 (-0.476059) |
| map no-op batched | 0.023824 / 0.534201 (-0.510377) |
| map no-op batched numpy | 0.505000 / 0.579283 (-0.074283) |
| map no-op batched pandas | 0.502866 / 0.434364 (0.068502) |
| map no-op batched pytorch | 0.629574 / 0.540337 (0.089237) |
| map no-op batched tensorflow | 0.746602 / 1.386936 (-0.640334) |
