pd.concat doesn't preserve Categorical dtype when the categorical column is missing in one of the DataFrames #25412
Bug indeed, though uncertain if this is a complete duplicate... cc @jreback

This can have severe memory consequences.

That was exactly how I found that out...

This appears to be related to #10409. pd.concat does not have the same behavior as DataFrame.merge, which can now handle combining categorical columns with different values in two DataFrames.
For reference, we get the same effect if the column is present in both DataFrames, but the categories themselves are different:

```python
a = pd.DataFrame({'f1': [1, 2, 3], 'f2': pd.Series(['a', 'b', 'b']).astype('category')})
b = pd.DataFrame({'f1': [2, 3, 1], 'f2': pd.Series(['b', 'b', 'b']).astype('category')})
pd.concat([a, b]).dtypes
```

```
f1     int64
f2    object
dtype: object
```
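One way to keep the dtype in this situation (a sketch, not taken from the thread) is to align the two category sets with `pandas.api.types.union_categoricals` before concatenating:

```python
import pandas as pd
from pandas.api.types import union_categoricals

a = pd.DataFrame({'f1': [1, 2, 3],
                  'f2': pd.Series(['a', 'b', 'b']).astype('category')})
b = pd.DataFrame({'f1': [2, 3, 1],
                  'f2': pd.Series(['b', 'b', 'b']).astype('category')})

# Compute the union of the two category sets...
union = union_categoricals([a['f2'], b['f2']])
# ...and recast both columns to that shared set of categories.
a['f2'] = a['f2'].cat.set_categories(union.categories)
b['f2'] = b['f2'].cat.set_categories(union.categories)

# With identical categorical dtypes, concat preserves the dtype.
result = pd.concat([a, b], ignore_index=True)
print(result.dtypes['f2'])  # category
```

Since both columns now share one `CategoricalDtype`, the concatenation no longer falls back to `object`.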
Looks to work on master now. Could use a test.
Hello, since append is deprecated, I've migrated all my append calls to concat. Usually, I have processing where I do something like:

Here, since out is an empty DataFrame at first, it will not keep the dtypes from the temporary df. For instance, if I have a datetime column, it is converted to object. Is that expected? Considering append is deprecated, this has a huge impact.
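The snippet the commenter refers to is not included above; the following is a hypothetical reconstruction of that pattern (`out` and `tmp` are illustrative names), with a guard that avoids concatenating onto the empty frame so the chunk's dtypes are kept:

```python
import pandas as pd

# Hypothetical reconstruction: accumulate chunks onto an initially
# empty DataFrame, as described in the comment above.
out = pd.DataFrame()
for day in ['2021-01-01', '2021-01-02']:
    tmp = pd.DataFrame({'ts': pd.to_datetime([day])})
    # Skip the concat on the first chunk so tmp's dtypes survive:
    out = tmp.copy() if out.empty else pd.concat([out, tmp], ignore_index=True)

print(out.dtypes['ts'])  # datetime64[ns]
```

Seeding `out` from the first chunk instead of an empty `DataFrame()` sidesteps the dtype loss entirely.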
Problem description
(Similar to #14016; not sure if it's caused by the same bug or another one. Feel free to merge.)

When concatenating two DataFrames where one has a categorical column that the other is missing, the result contains that column with 'object' dtype (losing the "real" dtype).

If we were to fill the missing column with Nones (but with the same categorical dtype), the concatenation would keep the dtype. In the previous example, adding such a column to the DataFrame that lacks it, before concatenating, solves the problem.

I believe that if a column is missing from one of the concatenated DataFrames, a reasonable behavior would be to create it in the result while preserving its dtype.
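A sketch of the workaround described above (the exact line from the original report is not shown, so this is a reconstruction): give the DataFrame that lacks the column an all-NaN column with the other frame's exact categorical dtype.

```python
import pandas as pd

# 'a' lacks the categorical column 'f2'; 'b' has it.
a = pd.DataFrame({'f1': [1, 2, 3]})
b = pd.DataFrame({'f1': [2, 3, 1],
                  'f2': pd.Series(['b', 'b', 'b']).astype('category')})

# Fill the missing column with NaNs, reusing b's categorical dtype.
a['f2'] = pd.Series([None] * len(a), dtype=b['f2'].dtype)

# Both columns now share the same dtype, so concat preserves it.
result = pd.concat([a, b], ignore_index=True)
print(result.dtypes['f2'])  # category
```

Because the two `f2` columns carry an identical `CategoricalDtype`, the concatenation keeps `category` instead of falling back to `object`.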
Expected Output
Column 'f2' should be categorical (the same dtype as b['f2']).
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Darwin
OS-release: 18.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.0
pytest: None
pip: 10.0.1
setuptools: 39.0.1
Cython: None
numpy: 1.14.3
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 6.4.0
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: 3.4.4
numexpr: 2.6.9
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.9999999
sqlalchemy: 1.1.13
pymysql: None
psycopg2: 2.7.3.2 (dt dec pq3 ext lo64)
jinja2: 2.9.4
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None