ENH: better dtype inference when doing DataFrame reductions #52788

Merged: 79 commits, Jul 13, 2023 (changes shown from 64 commits)

Commits
1e7e563
ENH: better dtype inference when doing DataFrame reductions
topper-123 Apr 19, 2023
6397977
precommit issues
topper-123 Apr 19, 2023
0e797b9
fix failures
topper-123 Apr 19, 2023
b846e70
fix failures
topper-123 Apr 19, 2023
76ce594
mypy + some docs
topper-123 Apr 20, 2023
7644598
doc linting linting
topper-123 Apr 20, 2023
51da9ef
refactor to use _reduce_with_wrap
topper-123 Apr 20, 2023
8d925cd
docstring linting
topper-123 Apr 20, 2023
d7d1989
pyarrow failure + linting
topper-123 Apr 20, 2023
54bcb60
pyarrow failure + linting
topper-123 Apr 20, 2023
03b8ce4
linting
topper-123 Apr 20, 2023
e0af36f
doc stuff
topper-123 Apr 20, 2023
64d8d60
linting fixes
topper-123 Apr 21, 2023
a95e5b9
fix fix doc string
topper-123 Apr 22, 2023
e7a75e4
remove _wrap_na_result
topper-123 Apr 22, 2023
2e64191
doc string example
topper-123 Apr 23, 2023
b6c1dc8
pyarrow + categorical
topper-123 Apr 24, 2023
32f9a73
silence bugs
topper-123 Apr 25, 2023
8bf7ba8
silence errors
topper-123 Apr 25, 2023
35b07c5
silence errors II
topper-123 Apr 25, 2023
6a390d4
fix errors III
topper-123 Apr 25, 2023
8dc2acf
various fixups
topper-123 Apr 25, 2023
5a65c70
various fixups
topper-123 Apr 25, 2023
9cb34ec
delay fixing windows and 32bit failures
topper-123 Apr 26, 2023
8521f18
BUG: Adding a columns to a Frame with RangeIndex columns using a non-…
topper-123 Apr 23, 2023
82cd91e
DOC: Update whatsnew (#52882)
phofl Apr 23, 2023
e0bc63e
CI: Change development python version to 3.10 (#51133)
phofl Apr 26, 2023
7cf26ae
update
topper-123 Apr 27, 2023
6330840
update
topper-123 Apr 29, 2023
efae9dc
add docs
topper-123 May 1, 2023
b585f3b
fix windows tests
topper-123 May 1, 2023
52763ab
fix windows tests
topper-123 May 1, 2023
d4f2a84
remove guards for 32bit linux
topper-123 May 2, 2023
7bfe3fe
add bool tests + fix 32-bit failures
topper-123 May 2, 2023
f48ea09
fix pre-commit failures
topper-123 May 2, 2023
bbd8cb8
fix mypy failures
topper-123 May 2, 2023
c6e9a80
rename _reduce_with -> _reduce_and_wrap
topper-123 May 2, 2023
5200896
assert missing attributes
topper-123 May 2, 2023
26d4059
reduction dtypes on windows and 32bit systems
topper-123 May 3, 2023
b6bd75e
add tests for min_count=0
topper-123 May 3, 2023
44dcdce
PERF:median with axis=1
topper-123 May 4, 2023
3ebcbff
median with axis=1 fix
topper-123 May 4, 2023
99d034e
streamline Block.reduce
topper-123 May 5, 2023
79df9db
fix comments
topper-123 May 6, 2023
d01fc1d
FIX preserve dtype with datetime columns of different resolution when…
glemaitre May 14, 2023
bc582f6
BUG Merge not behaving correctly when having `MultiIndex` with a sing…
Charlie-XIAO May 16, 2023
a7fd1b1
BUG: preserve dtype for right/outer merge of datetime with different …
jorisvandenbossche May 17, 2023
1781d30
remove special BooleanArray.sum method
topper-123 May 22, 2023
68fd316
remove BooleanArray.prod
topper-123 May 23, 2023
8ceb57d
fixes
topper-123 May 27, 2023
4375cb2
Update doc/source/whatsnew/v2.1.0.rst
topper-123 May 29, 2023
f7b354c
Update pandas/core/array_algos/masked_reductions.py
topper-123 May 29, 2023
f91c6ca
small cleanup
topper-123 May 29, 2023
9a881fa
small cleanup
topper-123 May 29, 2023
9d50f85
Merge branch 'master' into reduction_dtypes_II
topper-123 May 31, 2023
026696f
Merge branch 'master' into reduction_dtypes_II
topper-123 May 31, 2023
f603de0
only reduce 1d
topper-123 May 31, 2023
a7e69ad
Merge branch 'reduction_dtypes_II' of https://github.com/topper-123/p…
topper-123 May 31, 2023
772998f
fix after #53418
topper-123 May 31, 2023
b20a289
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 1, 2023
082ddd9
update according to comments
topper-123 Jun 3, 2023
8032514
revome note
topper-123 Jun 3, 2023
3a3ec95
update _minmax
topper-123 Jun 5, 2023
77992f7
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 5, 2023
23f22fb
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 10, 2023
3b8d8f0
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 10, 2023
1e39b65
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 19, 2023
1ed3e2d
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 24, 2023
467073a
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 27, 2023
dd0bfe8
Merge branch 'master' into reduction_dtypes_II
topper-123 Jun 29, 2023
49334c7
REF: add keepdims parameter to ExtensionArray._reduce + remove Extens…
topper-123 Jun 29, 2023
5634106
REF: add keepdims parameter to ExtensionArray._reduce + remove Extens…
topper-123 Jun 29, 2023
f85deab
fix whatsnew
topper-123 Jun 29, 2023
6519712
fix _reduce call
topper-123 Jun 29, 2023
74410f6
Merge branch 'master' into reduction_dtypes_II
topper-123 Jul 7, 2023
e7503dc
Merge branch 'master' into reduction_dtypes_II
topper-123 Jul 12, 2023
24e2d11
Merge branch 'master' into reduction_dtypes_II
topper-123 Jul 12, 2023
e3afa18
simplify test
topper-123 Jul 12, 2023
899a2fb
add tests for any/all
topper-123 Jul 13, 2023
1 change: 1 addition & 0 deletions doc/source/reference/extensions.rst
@@ -40,6 +40,7 @@ objects.
api.extensions.ExtensionArray._from_sequence_of_strings
api.extensions.ExtensionArray._hash_pandas_object
api.extensions.ExtensionArray._reduce
api.extensions.ExtensionArray._reduce_and_wrap
api.extensions.ExtensionArray._values_for_argsort
api.extensions.ExtensionArray._values_for_factorize
api.extensions.ExtensionArray.argsort
3 changes: 2 additions & 1 deletion doc/source/user_guide/integer_na.rst
@@ -126,10 +126,11 @@ These dtypes can be merged, reshaped & casted.
pd.concat([df[["A"]], df[["B", "C"]]], axis=1).dtypes
df["A"].astype(float)

Reduction and groupby operations such as 'sum' work as well.
Reduction and groupby operations such as :meth:`~DataFrame.sum` work as well.

.. ipython:: python

df.sum(numeric_only=True)
df.sum()
df.groupby("B").A.sum()

39 changes: 36 additions & 3 deletions doc/source/whatsnew/v2.1.0.rst
@@ -14,10 +14,43 @@ including other versions of pandas.
Enhancements
~~~~~~~~~~~~

.. _whatsnew_210.enhancements.enhancement1:
.. _whatsnew_210.enhancements.reduction_extension_dtypes:

enhancement1
^^^^^^^^^^^^
DataFrame reductions preserve extension dtypes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In previous versions of pandas, the results of DataFrame reductions
(:meth:`DataFrame.sum`, :meth:`DataFrame.mean`, etc.) had NumPy dtypes, even when the DataFrames
were of extension dtypes. pandas can now keep the dtypes when doing reductions over DataFrame
columns with a common dtype (:issue:`52788`).

*Old Behavior*

.. code-block:: ipython

In [1]: df = pd.DataFrame({"a": [1, 1, 2, 1], "b": [np.nan, 2.0, 3.0, 4.0]}, dtype="Int64")
In [2]: df.sum()
Out[2]:
a 5
b 9
dtype: int64
In [3]: df = df.astype("int64[pyarrow]")
In [4]: df.sum()
Out[4]:
a 5
b 9
dtype: int64

*New Behavior*

.. ipython:: python

df = pd.DataFrame({"a": [1, 1, 2, 1], "b": [np.nan, 2.0, 3.0, 4.0]}, dtype="Int64")
df.sum()
df = df.astype("int64[pyarrow]")
df.sum()

Notice that the resulting dtype is now a masked dtype and a pyarrow dtype, respectively, whereas previously it was a NumPy integer dtype.

.. _whatsnew_210.enhancements.enhancement2:

6 changes: 3 additions & 3 deletions pandas/core/array_algos/masked_reductions.py
@@ -52,7 +52,7 @@ def _reductions(
axis : int, optional, default None
"""
if not skipna:
if mask.any(axis=axis) or check_below_min_count(values.shape, None, min_count):
if mask.any() or check_below_min_count(values.shape, None, min_count):
return libmissing.NA
else:
return func(values, axis=axis, **kwargs)
@@ -119,11 +119,11 @@ def _minmax(
# min/max with empty array raise in numpy, pandas returns NA
return libmissing.NA
else:
return func(values)
return func(values, axis=axis)
else:
subset = values[~mask]
if subset.size:
return func(subset)
return func(subset, axis=axis)
Member:

Note that now you are using subset again on this line, passing this axis is not doing anything (and would actually raise an error if you would pass axis=1 here)

(it doesn't really matter in practice because we never call this with an axis=1, but seeing axis passed through might give the false impression that this algo actually supports 2D data, while that is not the case)

Member:

@topper-123 can you address this?

Contributor Author:

Oh, I thought I had answered this, apparently not...

func here is either np.min or np.max, so supplying axis=axis will not raise here, but will work as expected AFAIKS.

Additionally, without the axis=axis part, func(subset) is similar to np.max|min(subset, axis=None). Not a problem for 1d arrays, but will be a problem if we ever want to support df.min(axis=None) using 2d masked arrays. (I'm not sure we want to support 2d masked arrays or are going all in on arrow?)
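The equivalence claimed here can be checked with plain NumPy (a sketch, not pandas code): boolean masking always produces a 1-D `subset`, and for a 1-D array `np.min`/`np.max` give the same scalar whether `axis=0` or no axis is passed.

```python
import numpy as np

# Boolean indexing with ~mask yields a 1-D subset, so for np.min/np.max
# axis=0 and axis=None produce the same scalar result.
values = np.array([3, 1, 2], dtype=np.int64)
mask = np.array([False, True, False])  # the value 1 is masked out

subset = values[~mask]                 # array([3, 2])
assert np.min(subset) == 2
assert np.min(subset, axis=0) == 2
assert np.max(subset) == np.max(subset, axis=0)
```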

else:
# min/max with empty array raise in numpy, pandas returns NA
return libmissing.NA
6 changes: 6 additions & 0 deletions pandas/core/arrays/arrow/array.py
@@ -1549,6 +1549,12 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):

return result.as_py()

def _reduce_and_wrap(self, name: str, *, skipna: bool = True, kwargs):
Contributor:

why are you adding another method here? what's wrong with just fixing _reduce?

Contributor Author:

_reduce on 1d arrays only returns a scalar and we can't differentiate between scalars from reductions from e.g. numpy.int64 and pandas.Int64() arrays. Reductions that return pd.NA are just as bad, because pd.NA holds no dtype info.

Also, we can't supply keepdims to _reduce, because pandas raises when keepdims is given as a parameter in the reduction methods.
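A minimal illustration of the ambiguity being described (assuming pandas is available; the exact scalar type may vary by version):

```python
import pandas as pd

# The scalar result of a 1-D reduction carries no extension-dtype information:
arr = pd.array([1, 2, pd.NA], dtype="Int64")
total = arr.sum()            # a plain integer scalar -- the "Int64" dtype is gone
assert int(total) == 3

# An all-NA reduction is just as opaque: pd.NA has no dtype attached.
all_na = pd.array([pd.NA], dtype="Int64")
assert all_na.sum(min_count=1) is pd.NA
```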

Contributor:

so why don't you just update _reduce?

Member:

There was a version similar to this that added a keepdims kwd to _reduce and we decided that this was better bc it didn't require a deprecation path for 3rd party EAs

Contributor Author:

_reduce calls other methods, e.g. sum. It's in those methods the failures happen when we give keepdims=True and those methods are public. Do we want to change their signatures (and the ._reduce signature) to include keepdims?

Contributor Author:

If it's just adding the keepdims keyword to _reduce, that will be relatively easy technically. It's adding it to sum etc. that probably will take more effort. Also note @jbrockmendel's comment about the deprecation path.

Member:

Compatibility concerns aside, I think the keepdims argument to _reduce is the nicer solution.
But, external EAs don't have this keyword, so that means we would need to add some compatibility code anyway, everywhere we call the _reduce method (check if it supports the new keyword, and if not still wrap the result in np.array([res]) array, just like the current base implementation of _reduce_and_wrap does). With that, I am not sure that will be an improvement over the current solution.

Member:

If a hypothetical EA wanted to do a reduction lazily, that would be much easier with a keepdims keyword than with a _reduce_and_wrap method. Just a thought, not worth contorting ourselves over a hypothetical EA

Contributor Author (topper-123, Jun 25, 2023):

I'm not opposed to a keepdims parameter in _reduce, except for the compatibility concerns, but I would like a decision.

One way to address the compatibility concerns could be to introspect the signature of _reduce to see if it has a keepdims parameter or not. If it does, call _reduce with keepdims=True when doing dataframe reductions. If it doesn't, call it without a keepdims parameter, emit a warning that keepdims will become required in the future and wrap the scalar reduction result in a numpy array like result = np.array(result).reshape(1), to keep the current behavior.

In v3.0 we'll skip the signature introspection and make the keepdims parameter required.
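A sketch of that introspection idea (a hypothetical shim, not actual pandas code; `LegacyEA` and `NewEA` are stand-ins for third-party extension arrays):

```python
import inspect

import numpy as np

class LegacyEA:
    # old-style third-party EA: _reduce returns a scalar, no keepdims parameter
    def _reduce(self, name, *, skipna=True):
        return 3

class NewEA:
    # new-style EA: accepts keepdims and can return a length-1 array
    def _reduce(self, name, *, skipna=True, keepdims=False):
        result = 3
        return np.array([result]) if keepdims else result

def reduce_compat(ea, name):
    """Pass keepdims=True only if the EA's _reduce signature accepts it."""
    if "keepdims" in inspect.signature(type(ea)._reduce).parameters:
        return ea._reduce(name, keepdims=True)
    # legacy path: warn (omitted here) and wrap the scalar, as proposed above
    return np.array(ea._reduce(name)).reshape(1)

assert reduce_compat(LegacyEA(), "sum").tolist() == [3]
assert reduce_compat(NewEA(), "sum").tolist() == [3]
```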

"""Takes the result of ``_reduce`` and wraps it an a ndarray/extensionArray."""
result = self._reduce_pyarrow(name, skipna=skipna, **kwargs)
result = pa.array([result.as_py()], type=result.type)
return type(self)(result)

def __setitem__(self, key, value) -> None:
"""Set one or more values inplace.

29 changes: 29 additions & 0 deletions pandas/core/arrays/base.py
@@ -136,6 +136,7 @@ class ExtensionArray:
_from_sequence_of_strings
_hash_pandas_object
_reduce
_reduce_and_wrap
_values_for_argsort
_values_for_factorize

@@ -184,6 +185,7 @@ class ExtensionArray:

* _accumulate
* _reduce
* _reduce_and_wrap

One can implement methods to handle parsing from strings that will be used
in methods such as ``pandas.io.parsers.read_csv``.
@@ -1425,6 +1427,11 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
Raises
------
TypeError : subclass does not define reductions

See Also
--------
ExtensionArray._reduce_and_wrap
Calls ``_reduce`` and wraps the result in an ndarray/ExtensionArray.
"""
meth = getattr(self, name, None)
if meth is None:
@@ -1434,6 +1441,28 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
)
return meth(skipna=skipna, **kwargs)

def _reduce_and_wrap(self, name: str, *, skipna: bool = True, kwargs):
"""
Call ``_reduce`` and wrap the result in an ndarray/ExtensionArray.

This is used to control the returned dtype when doing reductions in DataFrames,
and ensures the correct dtype for e.g. ``DataFrame({"a": extr_arr2}).sum()``.

Returns
-------
ndarray or ExtensionArray

Examples
--------
>>> arr = pd.array([1, 2, pd.NA])
>>> arr._reduce_and_wrap("sum", kwargs={})
<IntegerArray>
[3]
Length: 1, dtype: Int64
"""
result = self._reduce(name, skipna=skipna, **kwargs)
return np.array([result])

# https://github.com/python/typeshed/issues/2148#issuecomment-520783318
# Incompatible types in assignment (expression has type "None", base class
# "object" defined the type as "Callable[[object], int]")
4 changes: 4 additions & 0 deletions pandas/core/arrays/categorical.py
@@ -2124,6 +2124,10 @@ def _reverse_indexer(self) -> dict[Hashable, npt.NDArray[np.intp]]:
# ------------------------------------------------------------------
# Reductions

def _reduce_and_wrap(self, name: str, *, skipna: bool = True, kwargs):
result = self._reduce(name, skipna=skipna, **kwargs)
return type(self)([result], dtype=self.dtype)
Member:

do we get here with e.g. any/all?

Contributor Author (topper-123, May 23, 2023):

Categorical doesn't support any/all, IDK why actually, seems like it could, if the categories do.

Do you have any specific issue or other array in mind?

Contributor Author:

gentle ping...

Member:

maybe a comment that if any/all are ever supported then we shouldnt do this wrapping?
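For min/max, which ordered categoricals do support, the wrapping discussed here means a DataFrame reduction can keep the categorical values intact. A small illustration (assuming pandas is available; whether the result dtype is preserved as `category` depends on having this change, i.e. pandas 2.1+):

```python
import pandas as pd

# An ordered categorical supports min/max (but not any/all, as noted above).
cat = pd.Categorical(["b", "a", "c"], categories=["a", "b", "c"], ordered=True)
df = pd.DataFrame({"x": cat})

result = df.min()
assert result["x"] == "a"
# On pandas versions including this PR, result["x"] comes from
# Categorical._reduce_and_wrap, so the category dtype is preserved.
```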


def min(self, *, skipna: bool = True, **kwargs):
"""
The minimum value of the object.
68 changes: 57 additions & 11 deletions pandas/core/arrays/masked.py
@@ -34,6 +34,10 @@
Shape,
npt,
)
from pandas.compat import (
IS64,
is_platform_windows,
)
from pandas.errors import AbstractMethodError
from pandas.util._decorators import doc
from pandas.util._validators import validate_fillna_kwargs
Expand Down Expand Up @@ -1088,13 +1092,22 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):

# median, skew, kurt, sem
op = getattr(nanops, f"nan{name}")
result = op(data, axis=0, skipna=skipna, mask=mask, **kwargs)

axis = kwargs.pop("axis", None)
result = op(data, axis=axis, skipna=skipna, mask=mask, **kwargs)
if np.isnan(result):
return libmissing.NA

return result

def _reduce_and_wrap(self, name: str, *, skipna: bool = True, kwargs):
res = self._reduce(name=name, skipna=skipna, **kwargs)
if res is libmissing.NA:
return self._wrap_na_result(name=name, axis=0, mask_size=(1,))
else:
res = res.reshape(1)
mask = np.zeros(1, dtype=bool)
return self._maybe_mask_result(res, mask)

def _wrap_reduction_result(self, name: str, result, *, skipna, axis):
if isinstance(result, np.ndarray):
if skipna:
@@ -1106,6 +1119,32 @@ def _wrap_reduction_result(self, name: str, result, *, skipna, axis):
return self._maybe_mask_result(result, mask)
return result

def _wrap_na_result(self, *, name, axis, mask_size):
mask = np.ones(mask_size, dtype=bool)

float_dtyp = "float32" if self.dtype == "Float32" else "float64"
if name in ["mean", "median", "var", "std", "skew"]:
np_dtype = float_dtyp
elif name in ["min", "max"] or self.dtype.itemsize == 8:
np_dtype = self.dtype.numpy_dtype.name
else:
is_windows_or_32bit = is_platform_windows() or not IS64
int_dtyp = "int32" if is_windows_or_32bit else "int64"
uint_dtyp = "uint32" if is_windows_or_32bit else "uint64"
np_dtype = {"b": int_dtyp, "i": int_dtyp, "u": uint_dtyp, "f": float_dtyp}[
self.dtype.kind
]

value = np.array([1], dtype=np_dtype)
return self._maybe_mask_result(value, mask=mask)

def _wrap_min_count_reduction_result(
self, name: str, result, *, skipna, min_count, axis
):
if min_count == 0 and isinstance(result, np.ndarray):
return self._maybe_mask_result(result, np.zeros(result.shape, dtype=bool))
return self._wrap_reduction_result(name, result, skipna=skipna, axis=axis)

def sum(
self,
*,
@@ -1123,7 +1162,9 @@ def sum(
min_count=min_count,
axis=axis,
)
return self._wrap_reduction_result("sum", result, skipna=skipna, axis=axis)
return self._wrap_min_count_reduction_result(
"sum", result, skipna=skipna, min_count=min_count, axis=axis
)
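The `min_count` distinction this helper encodes can be seen directly (a sketch assuming pandas is available): with `min_count=0` an all-NA sum is a valid `0`, so the wrapped result must not be masked, while `min_count=1` yields NA.

```python
import pandas as pd

arr = pd.array([pd.NA, pd.NA], dtype="Int64")

# min_count=0 (the default): an all-NA sum is defined to be 0, not NA,
# so the result mask must be all-False in this case.
assert arr.sum() == 0

# min_count=1: not enough valid values, so the result is NA.
assert arr.sum(min_count=1) is pd.NA
```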

def prod(
self,
@@ -1134,14 +1175,17 @@ def prod(
**kwargs,
):
nv.validate_prod((), kwargs)

result = masked_reductions.prod(
self._data,
self._mask,
skipna=skipna,
min_count=min_count,
axis=axis,
)
return self._wrap_reduction_result("prod", result, skipna=skipna, axis=axis)
return self._wrap_min_count_reduction_result(
"prod", result, skipna=skipna, min_count=min_count, axis=axis
)

def mean(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
nv.validate_mean((), kwargs)
@@ -1181,23 +1225,25 @@ def std(

def min(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
nv.validate_min((), kwargs)
return masked_reductions.min(
result = masked_reductions.min(
self._data,
self._mask,
skipna=skipna,
axis=axis,
)
return self._wrap_reduction_result("min", result, skipna=skipna, axis=axis)

def max(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
nv.validate_max((), kwargs)
return masked_reductions.max(
result = masked_reductions.max(
self._data,
self._mask,
skipna=skipna,
axis=axis,
)
return self._wrap_reduction_result("max", result, skipna=skipna, axis=axis)

def any(self, *, skipna: bool = True, **kwargs):
def any(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
Member:

Is it needed to add the axis keyword here (it's not actually being used?)

Contributor Author (topper-123, May 29, 2023):

I'll look into it, could be connected to your previous comment.

Contributor Author:

I think this works. I've made another version; we'll see if it passes, and then I'll look into your other comments.

"""
Return whether any element is truthy.

@@ -1216,6 +1262,7 @@ def any(self, *, skipna: bool = True, **kwargs):
If `skipna` is False, the result will still be True if there is
at least one element that is truthy, otherwise NA will be returned
if there are NA's present.
axis : int, optional, default 0
**kwargs : any, default None
Additional keywords have no effect but might be accepted for
compatibility with NumPy.
@@ -1259,7 +1306,6 @@ def any(self, *, skipna: bool = True, **kwargs):
>>> pd.array([0, 0, pd.NA]).any(skipna=False)
<NA>
"""
kwargs.pop("axis", None)
nv.validate_any((), kwargs)

values = self._data.copy()
@@ -1278,7 +1324,7 @@ def any(self, *, skipna: bool = True, **kwargs):
else:
return self.dtype.na_value

def all(self, *, skipna: bool = True, **kwargs):
def all(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
"""
Return whether all elements are truthy.

@@ -1297,6 +1343,7 @@ def all(self, *, skipna: bool = True, **kwargs):
If `skipna` is False, the result will still be False if there is
at least one element that is falsey, otherwise NA will be returned
if there are NA's present.
axis : int, optional, default 0
**kwargs : any, default None
Additional keywords have no effect but might be accepted for
compatibility with NumPy.
@@ -1340,7 +1387,6 @@ def all(self, *, skipna: bool = True, **kwargs):
>>> pd.array([1, 0, pd.NA]).all(skipna=False)
False
"""
kwargs.pop("axis", None)
nv.validate_all((), kwargs)

values = self._data.copy()
Expand All @@ -1350,7 +1396,7 @@ def all(self, *, skipna: bool = True, **kwargs):
# bool, int, float, complex, str, bytes,
# _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"
np.putmask(values, self._mask, self._truthy_value) # type: ignore[arg-type]
result = values.all()
result = values.all(axis=axis)

if skipna:
return result
14 changes: 6 additions & 8 deletions pandas/core/frame.py
@@ -10852,7 +10852,7 @@ def blk_func(values, axis: Axis = 1):
self._mgr, ArrayManager
):
return values._reduce(name, axis=1, skipna=skipna, **kwds)
return values._reduce(name, skipna=skipna, **kwds)
return values._reduce_and_wrap(name, skipna=skipna, kwargs=kwds)
else:
return op(values, axis=axis, skipna=skipna, **kwds)

@@ -10897,7 +10897,7 @@ def _get_data() -> DataFrame:
out = out.astype(out_dtype)
elif (df._mgr.get_dtypes() == object).any():
out = out.astype(object)
elif len(self) == 0 and name in ("sum", "prod"):
elif len(self) == 0 and out.dtype == object and name in ("sum", "prod"):
# Even if we are object dtype, follow numpy and return
# float64, see test_apply_funcs_over_empty
out = out.astype(np.float64)
@@ -11158,10 +11158,9 @@ def idxmin(
)
indices = res._values

# indices will always be np.ndarray since axis is not None and
# indices will always be 1d array since axis is not None and
# values is a 2d array for DataFrame
# error: Item "int" of "Union[int, Any]" has no attribute "__iter__"
assert isinstance(indices, np.ndarray) # for mypy
# indices will always be np.ndarray since axis is not N

index = data._get_axis(axis)
result = [index[i] if i >= 0 else np.nan for i in indices]
@@ -11188,10 +11187,9 @@ def idxmax(
)
indices = res._values

# indices will always be np.ndarray since axis is not None and
# indices will always be 1d array since axis is not None and
# values is a 2d array for DataFrame
# error: Item "int" of "Union[int, Any]" has no attribute "__iter__"
assert isinstance(indices, np.ndarray) # for mypy
assert isinstance(indices, (np.ndarray, ExtensionArray)) # for mypy

index = data._get_axis(axis)
result = [index[i] if i >= 0 else np.nan for i in indices]