chore(docs): remove preview warning for async data client #945

Merged
merged 3 commits into from Mar 28, 2024
6 changes: 2 additions & 4 deletions README.rst
@@ -21,18 +21,16 @@ Analytics, Maps, and Gmail.
.. _Product Documentation: https://cloud.google.com/bigtable/docs


Preview Async Data Client
Async Data Client
-------------------------

:code:`v2.23.0` includes a preview release of the new :code:`BigtableDataClientAsync` client, accessible at the import path
:code:`v2.23.0` includes a release of the new :code:`BigtableDataClientAsync` client, accessible at the import path
:code:`google.cloud.bigtable.data`.

The new client brings a simplified API and increased performance using asyncio, with a corresponding synchronous surface
coming soon. The new client is focused on the data API (i.e. reading and writing Bigtable data), with admin operations
remaining in the existing client.

:code:`BigtableDataClientAsync` is currently in preview, and is not recommended for production use.

Feedback and bug reports are welcome at [email protected],
or through the Github `issue tracker`_.
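For orientation, a minimal usage sketch of the client described above; the project, instance, and table identifiers are placeholders and error handling is omitted:

```python
import asyncio

from google.cloud.bigtable.data import BigtableDataClientAsync, ReadRowsQuery


async def main():
    # Placeholder identifiers; substitute a real project, instance, and table.
    async with BigtableDataClientAsync(project="my-project") as client:
        table = client.get_table("my-instance", "my-table")
        # Read up to 10 rows and print their keys.
        rows = await table.read_rows(ReadRowsQuery(limit=10))
        for row in rows:
            print(row.row_key)


asyncio.run(main())
```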

7 changes: 7 additions & 0 deletions docs/data-api.rst
@@ -1,6 +1,13 @@
Data API
========

.. note::
   This page describes how to use the Data API with the synchronous Bigtable client.
   Examples for using the Data API with the async client can be found in the
   `Getting Started Guide`_.

.. _Getting Started Guide: https://cloud.google.com/bigtable/docs/samples-python-hello

After creating a :class:`Table <google.cloud.bigtable.table.Table>` and some
column families, you are ready to store and retrieve data.
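To complement the note above, a short sketch of a write followed by a read with the synchronous client; the identifiers and the "stats" column family are placeholders and are assumed to already exist:

```python
from google.cloud import bigtable

# Placeholder identifiers; the instance, table, and "stats" column family
# are assumed to already exist.
client = bigtable.Client(project="my-project")
instance = client.instance("my-instance")
table = instance.table("my-table")

# Write a single cell, then read the row back.
row = table.direct_row(b"row-key-1")
row.set_cell("stats", b"views", b"1")
row.commit()

row_data = table.read_row(b"row-key-1")
print(row_data.cells["stats"][b"views"][0].value)
```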

6 changes: 2 additions & 4 deletions google/cloud/bigtable/data/README.rst
@@ -1,7 +1,5 @@
Async Data Client Preview
=========================

This new client is currently in preview, and is not recommended for production use.
Async Data Client
=================

Synchronous API surface and usage examples coming soon

36 changes: 0 additions & 36 deletions google/cloud/bigtable/data/_async/client.py
@@ -101,9 +101,6 @@ def __init__(

Client should be created within an async context (running event loop)

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
project: the project which the client acts on behalf of.
If not passed, falls back to the default inferred
@@ -566,9 +563,6 @@ async def read_rows_stream(
Failed requests within operation_timeout will be retried based on the
retryable_errors list until operation_timeout is reached.

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- query: contains details about which rows to return
- operation_timeout: the time budget for the entire operation, in seconds.
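For reference, a minimal streaming sketch for the read_rows_stream surface shown above; it assumes it runs inside an async function with `table` obtained via BigtableDataClientAsync.get_table(), and the row range is illustrative:

```python
from google.cloud.bigtable.data import ReadRowsQuery, RowRange

# Assumes an async context and a TableAsync instance named `table`.
query = ReadRowsQuery(row_ranges=RowRange(start_key=b"user#000", end_key=b"user#999"))
row_stream = await table.read_rows_stream(query, operation_timeout=60)
async for row in row_stream:
    print(row.row_key)
```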
@@ -620,9 +614,6 @@ async def read_rows(
Failed requests within operation_timeout will be retried based on the
retryable_errors list until operation_timeout is reached.

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- query: contains details about which rows to return
- operation_timeout: the time budget for the entire operation, in seconds.
@@ -669,9 +660,6 @@ async def read_row(
Failed requests within operation_timeout will be retried based on the
retryable_errors list until operation_timeout is reached.

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- query: contains details about which rows to return
- operation_timeout: the time budget for the entire operation, in seconds.
@@ -727,9 +715,6 @@ async def read_rows_sharded(
results = await table.read_rows_sharded(shard_queries)
```

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- sharded_query: a sharded query to execute
- operation_timeout: the time budget for the entire operation, in seconds.
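Filling out the truncated snippet above, the usual sharded-read flow pairs sample_row_keys with ReadRowsQuery.shard (a sketch, under the same assumptions about `table`):

```python
from google.cloud.bigtable.data import ReadRowsQuery

# Assumes an async context and a TableAsync instance named `table`.
sample_keys = await table.sample_row_keys()
shard_queries = ReadRowsQuery().shard(sample_keys)  # split one query along tablet boundaries
results = await table.read_rows_sharded(shard_queries)
```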
@@ -810,9 +795,6 @@ async def row_exists(
Return a boolean indicating whether the specified row exists in the table.
Uses the filters: chain(limit cells per row = 1, strip value)

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- row_key: the key of the row to check
- operation_timeout: the time budget for the entire operation, in seconds.
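A one-line usage sketch for row_exists, under the same assumptions about `table`:

```python
# Assumes an async context and a TableAsync instance named `table`.
if await table.row_exists(b"row-key-1"):
    print("row is present")
```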
@@ -867,9 +849,6 @@ async def sample_row_keys(
RowKeySamples is simply a type alias for list[tuple[bytes, int]]; a list of
row_keys, along with offset positions in the table

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- operation_timeout: the time budget for the entire operation, in seconds.
Failed requests will be retried within the budget.
@@ -942,9 +921,6 @@ def mutations_batcher(
Can be used to iteratively add mutations that are flushed as a group,
to avoid excess network calls

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- flush_interval: Automatically flush every flush_interval seconds. If None,
a table default will be used
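A sketch of the batching pattern described above; the row keys, column family, and batcher settings are illustrative, and append is assumed to be awaitable on the async batcher:

```python
from google.cloud.bigtable.data import RowMutationEntry, SetCell

# Assumes an async context and a TableAsync instance named `table`.
async with table.mutations_batcher(flush_interval=5) as batcher:
    for i in range(100):
        entry = RowMutationEntry(
            f"row-{i}".encode(),
            [SetCell("stats", b"views", b"1")],
        )
        await batcher.append(entry)
# Any mutations still buffered are flushed when the context manager exits.
```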
@@ -994,9 +970,6 @@ async def mutate_row(
Idempotent operations (i.e., all mutations have an explicit timestamp) will be
retried on server failure. Non-idempotent operations will not.

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- row_key: the row to apply mutations to
- mutations: the set of mutations to apply to the row
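A minimal sketch for the single-row path above; the explicit timestamp is illustrative and makes the mutation idempotent, so it can be retried:

```python
import time

from google.cloud.bigtable.data import SetCell

# Assumes an async context and a TableAsync instance named `table`.
# Bigtable timestamps have millisecond granularity, expressed in microseconds.
ts_micros = int(time.time() * 1000) * 1000
await table.mutate_row(
    b"row-key-1",
    SetCell("stats", b"views", b"1", timestamp_micros=ts_micros),
)
```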
@@ -1077,9 +1050,6 @@ async def bulk_mutate_rows(
will be retried on failure. Non-idempotent will not, and will be reported in a
raised exception group

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- mutation_entries: the batches of mutations to apply
Each entry will be applied atomically, but entries will be applied
@@ -1128,9 +1098,6 @@ async def check_and_mutate_row(

Non-idempotent operation: will not be retried

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- row_key: the key of the row to mutate
- predicate: the filter to be applied to the contents of the specified row.
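A sketch of the conditional write described above; the predicate filter, mutations, and keyword names are written as assumptions about the async surface and may need adjusting:

```python
from google.cloud.bigtable.data import SetCell
from google.cloud.bigtable.data.row_filters import ValueRegexFilter

# Assumes an async context and a TableAsync instance named `table`.
predicate_matched = await table.check_and_mutate_row(
    b"row-key-1",
    ValueRegexFilter(b"pending"),  # predicate applied to the row's contents
    true_case_mutations=[SetCell("stats", b"state", b"done")],
    false_case_mutations=[SetCell("stats", b"state", b"pending")],
)
print("predicate matched:", predicate_matched)
```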
@@ -1199,9 +1166,6 @@ async def read_modify_write_row(

Non-idempotent operation: will not be retried

Warning: BigtableDataClientAsync is currently in preview, and is not
yet recommended for production use.

Args:
- row_key: the key of the row to apply read/modify/write rules to
- rules: A rule or set of rules to apply to the row.
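Finally, a sketch of the read-modify-write path above; the IncrementRule import path is an assumption based on the data package layout, and the identifiers are placeholders:

```python
from google.cloud.bigtable.data.read_modify_write_rules import IncrementRule

# Assumes an async context and a TableAsync instance named `table`.
# Atomically increment the cell and return the row's updated contents.
updated_row = await table.read_modify_write_row(
    b"row-key-1",
    IncrementRule("stats", b"views", 1),
)
print(updated_row)
```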