Currently, we have a single function xarray.decode_cf() that we apply to data loaded from all xarray backends.
This is appropriate for netCDF data, but not for backends with different conventions. For example, it doesn't work for zarr (which is why we have the separate open_zarr), and it is also a poor fit for PseudoNetCDF (#1905). In the worst cases (e.g., for PseudoNetCDF) it can actually cause data to be decoded twice, which produces incorrectly scaled values.
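A minimal sketch of that double-decoding failure mode (the dataset and values below are made up purely for illustration): if a backend has already applied scale_factor/add_offset but the CF packing attributes are still attached to the variable, decode_cf() re-applies them.

```python
import numpy as np
import xarray as xr

# Illustrative dataset: the backend has *already* applied scale_factor and
# add_offset, but the CF packing attributes are still present on the variable.
already_decoded = xr.Dataset(
    {"t": ("x", np.array([21.5, 22.0]), {"scale_factor": 0.5, "add_offset": 20.0})}
)

# decode_cf() cannot tell that the values are already unpacked, so it applies
# scale_factor/add_offset a second time, silently corrupting the data.
twice_decoded = xr.decode_cf(already_decoded)
print(twice_decoded["t"].values)  # [30.75 31.  ] instead of [21.5 22. ]
```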
Instead, we should declare default decoders as part of the backend API, and use those decoders as the defaults for open_dataset().
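One possible shape for this, as a rough sketch only (the backend classes, the default_decoders attribute, and the _BACKENDS lookup below are hypothetical, not existing xarray API): each backend declares which decoding steps make sense for it, and open_dataset() merges those defaults with whatever the user passes explicitly before calling decode_cf().

```python
import xarray as xr


class NetCDF4Backend:
    """Hypothetical backend: netCDF files want the full set of CF decoders."""
    default_decoders = {"mask_and_scale": True, "decode_times": True,
                        "concat_characters": True}

    def open(self, filename):
        ...  # open the file and return a data store (details omitted)


class PseudoNetCDFBackend:
    """Hypothetical backend: PseudoNetCDF already returns unpacked values,
    so re-applying mask_and_scale would decode the data twice."""
    default_decoders = {"mask_and_scale": False, "decode_times": False,
                        "concat_characters": False}

    def open(self, filename):
        ...


# Hypothetical registry mapping engine names to backend instances.
_BACKENDS = {"netcdf4": NetCDF4Backend(), "pseudonetcdf": PseudoNetCDFBackend()}


def open_dataset(filename, engine, **user_decoders):
    backend = _BACKENDS[engine]
    # User-supplied options override the backend's declared defaults.
    decoders = {**backend.default_decoders, **user_decoders}
    store = backend.open(filename)
    # mask_and_scale, decode_times and concat_characters are real decode_cf()
    # keyword arguments; only the defaults now come from the backend.
    return xr.decode_cf(store, **decoders)
```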
This should probably be tackled as part of the broader backends refactor: #1970