REGR: to_csv problems with zip compression and large dataframes #38714
Comments
Thanks @chmielcode for the report. The code sample failed on previous versions as well.
Is there a combination of inferred/explicit compression and buffer type that worked previously and now fails?
@simonjayhawkins Thank you for the quick response. I only noticed this problem after upgrading to 1.2.0, when my data caching system started failing; there were no issues with 1.1.5. The same happens with a string path as the first argument, which is how I normally use this method. BytesIO in the example code was only there to make it as clean (no write to disk) as possible.
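For illustration, a minimal sketch of the string-path variant described above (the file name is a placeholder; compression is inferred from the .zip suffix, so no explicit argument is needed):

import pandas as pd

# More than 1163 rows, the threshold reported above.
d = pd.DataFrame({'a': [1] * 5000})

# 'test.csv.zip' is a hypothetical path; compression='infer' (the default)
# picks zip from the extension, matching the explicit compression='zip' case.
d.to_csv('test.csv.zip')

# On pandas 1.2.0 this reportedly raises the "Multiple files found" ValueError.
pd.read_csv('test.csv.zip')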
Output: ValueError: Multiple files found in ZIP file. Only one file per ZIP: ['T:/test.csv.zip', 'T:/test.csv.zip']
The error message reports 2 files for 1164-2188 rows, 3 files for 2189-3213 rows, and so on (one additional file per 1024 rows). The larger the frame, the more files are reported in the zip archive.
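One way to see how the number of entries grows with the row count (a sketch, not taken from the original report) is to open the written buffer with the standard-library zipfile module:

import io
import zipfile

import pandas as pd

def zip_entry_count(n_rows):
    # Write n_rows rows to an in-memory zip and count the archive entries.
    buf = io.BytesIO()
    pd.DataFrame({'a': [1] * n_rows}).to_csv(buf, compression='zip')
    buf.seek(0)
    return len(zipfile.ZipFile(buf).namelist())

for n in (1000, 1164, 2189, 5000):
    print(n, zip_entry_count(n))
# On pandas 1.2.0 this reportedly shows 1, 2, 3, and 5 entries respectively;
# on 1.1.5 and on fixed versions it is always 1.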
First bad commit: [3b88446] support binary file handles in to_csv (#35129) cc @twoertwein
Sorry about that, I will look into it! I assume that
I made a PR. @chmielcode your initial example needs a call to f.seek(0) before reading the buffer back:

import pandas as pd
import io

f = io.BytesIO()
d = pd.DataFrame({'a': [1] * 5000})
d.to_csv(f, compression='zip')
f.seek(0)
pd.read_csv(f, compression='zip')
@twoertwein Thank you very much. I've updated the example. It works without seek(0), but now it's clear that the missing seek is not the cause.
I was testing whether setting
@chmielcode yes, you are right, for zip compression you don't need a seek (it seems that
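To illustrate the seek point, a small sketch based on this exchange (the frame is small enough not to trigger the bug):

import io

import pandas as pd

f = io.BytesIO()
pd.DataFrame({'a': [1, 2, 3]}).to_csv(f, compression='zip')

# No rewind before reading: zipfile finds the archive's central directory by
# seeking from the end of the buffer, so the current position does not matter.
print(pd.read_csv(f, compression='zip'))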
Code Sample, a copy-pastable example
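A minimal sketch reconstructed from the updated example in the discussion above (the original snippet itself is not reproduced here):

import io

import pandas as pd

f = io.BytesIO()
d = pd.DataFrame({'a': [1] * 5000})
d.to_csv(f, compression='zip')
f.seek(0)
pd.read_csv(f, compression='zip')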
Problem description
Writing large (over 1163 rows) DataFrames to CSV with zip compression (inferred or explicit; to a file or an io.BytesIO buffer) creates a corrupted zip file. Reading it back fails with:
ValueError: Multiple files found in ZIP file. Only one file per ZIP: ['zip', 'zip', 'zip', 'zip', 'zip']
Output of pd.show_versions():
INSTALLED VERSIONS
commit : 3e89b4c
python : 3.8.6.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19041
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Polish_Poland.1250
pandas : 1.2.0
numpy : 1.19.3
pytz : 2020.5
dateutil : 2.8.1
pip : 20.3.3
setuptools : 51.1.0.post20201221
Cython : None
pytest : 6.2.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.19.0
pandas_datareader: None
bs4 : None
bottleneck : 1.3.2
fsspec : 0.8.5
fastparquet : None
gcsfs : None
matplotlib : 3.3.3
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.5.4
sqlalchemy : None
tables : None
tabulate : None
xarray : 0.16.2
xlrd : None
xlwt : None
numba : 0.52.0