
feat: read_modify_write and check_and_mutate_row #780

Merged
merged 62 commits
Jun 16, 2023
Changes from all commits
1d02154
added initial implementation of mutate_rows
daniel-sanche Apr 24, 2023
ab63cba
implemented mutation models
daniel-sanche Apr 24, 2023
cf9daa5
added retries to mutate_row
daniel-sanche Apr 24, 2023
1247da4
return exception group if possible
daniel-sanche Apr 24, 2023
3b3ed8c
check for idempotence
daniel-sanche Apr 24, 2023
5d20037
initial implementation for bulk_mutations
daniel-sanche Apr 24, 2023
3d322a1
include successes in bulk mutation error message
daniel-sanche Apr 24, 2023
a31232b
fixed style checks
daniel-sanche Apr 24, 2023
8da2d65
added basic system tests
daniel-sanche Apr 24, 2023
2b89d9c
added unit tests for mutate_row
daniel-sanche Apr 25, 2023
47c5985
ran blacken
daniel-sanche Apr 25, 2023
38fdcd7
improved exceptions
daniel-sanche Apr 25, 2023
504d2d8
added bulk_mutate_rows unit tests
daniel-sanche Apr 25, 2023
b16067f
ran blacken
daniel-sanche Apr 25, 2023
3ab1405
support __new__ for exceptions for python3.11+
daniel-sanche Apr 25, 2023
0a6c0c6
added exception unit tests
daniel-sanche Apr 25, 2023
ec043cf
made exceptions tuple
daniel-sanche Apr 26, 2023
518530e
got exceptions to print consistently across versions
daniel-sanche Apr 26, 2023
9624729
added test for 311 rich traceback
daniel-sanche Apr 27, 2023
3087081
moved retryable row mutations to new file
daniel-sanche Apr 27, 2023
9df588f
use index map
daniel-sanche Apr 27, 2023
7ed8be3
added docstring
daniel-sanche Apr 27, 2023
2536cc4
added predicate check to failed mutations
daniel-sanche Apr 27, 2023
1f6875c
added _mutate_rows tests
daniel-sanche Apr 27, 2023
1ea24e6
improved client tests
daniel-sanche Apr 27, 2023
25ca2d2
refactored to loop by raising exception
daniel-sanche Apr 28, 2023
c0787db
refactored retry deadline logic into shared wrapper
daniel-sanche Apr 28, 2023
3ed5c3d
ran black
daniel-sanche Apr 28, 2023
a91fbcb
pulled in table default timeouts
daniel-sanche Apr 28, 2023
df8a058
added tests for shared deadline parsing function
daniel-sanche Apr 28, 2023
b866b57
added tests for mutation models
daniel-sanche Apr 28, 2023
54a4d43
fixed linter errors
daniel-sanche Apr 28, 2023
bd51dc4
added tests for BulkMutationsEntry
daniel-sanche Apr 28, 2023
921b05a
improved mutations documentation
daniel-sanche Apr 28, 2023
82ea61f
refactored mutate_rows logic into helper function
daniel-sanche May 2, 2023
fa42b86
implemented callbacks for mutate_rows
daniel-sanche May 2, 2023
01a16f3
made exceptions into a tuple
daniel-sanche May 5, 2023
e6df77e
improved and tested read_modify_write_rules models
daniel-sanche May 18, 2023
2d8ee3f
implemented read_modify_write
daniel-sanche May 18, 2023
af77dc3
added unit tests
daniel-sanche May 18, 2023
ebe2f94
added system test
daniel-sanche May 18, 2023
8af5c71
added test for large values
daniel-sanche May 18, 2023
1242836
allow string for append value rule
daniel-sanche May 18, 2023
afe839c
added append value system test
daniel-sanche May 18, 2023
d0781d0
added chained value system test
daniel-sanche May 18, 2023
ef30977
support creating SetValueMutation with int
daniel-sanche May 19, 2023
6140acb
remove aborted from retryable errors
daniel-sanche May 22, 2023
36ba2b6
improved SetCell mutation
daniel-sanche May 22, 2023
b3c9017
fixed mutations tests
daniel-sanche May 22, 2023
cac9e2d
SetCell timestamps use millisecond precision
daniel-sanche May 22, 2023
34b051f
renamed BulkMutationsEntry to RowMutationEntry
daniel-sanche May 22, 2023
baf3378
implemented check_and_mutate
daniel-sanche May 22, 2023
bad11e5
added system tests
daniel-sanche May 22, 2023
1d79202
fixed test issues
daniel-sanche May 23, 2023
63ac35c
Merge branch 'v3' into mutate_rows
daniel-sanche May 24, 2023
4138c89
Merge branch 'mutate_rows' into mutate_rows_other_rpcs
daniel-sanche May 24, 2023
3c27fb7
Merge branch 'v3' into mutate_rows_other_rpcs
daniel-sanche Jun 7, 2023
b9b9dac
adjusted tests; require kwargs for check_and_mutate
daniel-sanche Jun 7, 2023
234ea6c
added metadata
daniel-sanche Jun 7, 2023
fb818d4
clean up
daniel-sanche Jun 8, 2023
c9cebc2
changed timeout values
daniel-sanche Jun 14, 2023
ef8879e
Merge branch 'v3' into mutate_rows_other_rpcs
daniel-sanche Jun 16, 2023
71 changes: 60 additions & 11 deletions google/cloud/bigtable/client.py
@@ -55,15 +55,15 @@
from google.cloud.bigtable._helpers import _make_metadata
from google.cloud.bigtable._helpers import _convert_retry_deadline

from google.cloud.bigtable.read_modify_write_rules import ReadModifyWriteRule
from google.cloud.bigtable.row_filters import RowFilter
from google.cloud.bigtable.row_filters import StripValueTransformerFilter
from google.cloud.bigtable.row_filters import CellsRowLimitFilter
from google.cloud.bigtable.row_filters import RowFilterChain

if TYPE_CHECKING:
from google.cloud.bigtable.mutations_batcher import MutationsBatcher
from google.cloud.bigtable import RowKeySamples
from google.cloud.bigtable.row_filters import RowFilter
from google.cloud.bigtable.read_modify_write_rules import ReadModifyWriteRule


class BigtableDataClient(ClientWithProject):
@@ -770,10 +770,11 @@ async def bulk_mutate_rows(
async def check_and_mutate_row(
self,
row_key: str | bytes,
predicate: RowFilter | None,
predicate: RowFilter | dict[str, Any] | None,
*,
true_case_mutations: Mutation | list[Mutation] | None = None,
false_case_mutations: Mutation | list[Mutation] | None = None,
operation_timeout: int | float | None = 60,
operation_timeout: int | float | None = 20,
) -> bool:
"""
Mutates a row atomically based on the output of a predicate filter
@@ -807,17 +808,43 @@ async def check_and_mutate_row(
Raises:
- GoogleAPIError exceptions from grpc call
"""
raise NotImplementedError
operation_timeout = operation_timeout or self.default_operation_timeout
if operation_timeout <= 0:
raise ValueError("operation_timeout must be greater than 0")
row_key = row_key.encode("utf-8") if isinstance(row_key, str) else row_key
if true_case_mutations is not None and not isinstance(
true_case_mutations, list
):
true_case_mutations = [true_case_mutations]
true_case_dict = [m._to_dict() for m in true_case_mutations or []]
if false_case_mutations is not None and not isinstance(
false_case_mutations, list
):
false_case_mutations = [false_case_mutations]
false_case_dict = [m._to_dict() for m in false_case_mutations or []]
if predicate is not None and not isinstance(predicate, dict):
predicate = predicate.to_dict()
metadata = _make_metadata(self.table_name, self.app_profile_id)
result = await self.client._gapic_client.check_and_mutate_row(
request={
"predicate_filter": predicate,
"true_mutations": true_case_dict,
"false_mutations": false_case_dict,
"table_name": self.table_name,
"row_key": row_key,
"app_profile_id": self.app_profile_id,
},
metadata=metadata,
timeout=operation_timeout,
)
return result.predicate_matched

async def read_modify_write_row(
self,
row_key: str | bytes,
rules: ReadModifyWriteRule
| list[ReadModifyWriteRule]
| dict[str, Any]
| list[dict[str, Any]],
rules: ReadModifyWriteRule | list[ReadModifyWriteRule],
*,
operation_timeout: int | float | None = 60,
operation_timeout: int | float | None = 20,
) -> Row:
"""
Reads and modifies a row atomically according to input ReadModifyWriteRules,
@@ -841,7 +868,29 @@ async def read_modify_write_row(
Raises:
- GoogleAPIError exceptions from grpc call
"""
raise NotImplementedError
operation_timeout = operation_timeout or self.default_operation_timeout
row_key = row_key.encode("utf-8") if isinstance(row_key, str) else row_key
if operation_timeout <= 0:
raise ValueError("operation_timeout must be greater than 0")
if rules is not None and not isinstance(rules, list):
rules = [rules]
if not rules:
raise ValueError("rules must contain at least one item")
# convert to dict representation
rules_dict = [rule._to_dict() for rule in rules]
metadata = _make_metadata(self.table_name, self.app_profile_id)
result = await self.client._gapic_client.read_modify_write_row(
request={
"rules": rules_dict,
"table_name": self.table_name,
"row_key": row_key,
"app_profile_id": self.app_profile_id,
},
metadata=metadata,
timeout=operation_timeout,
)
# construct Row from result
return Row._from_pb(result.row)

async def close(self):
"""
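The argument handling in the new `check_and_mutate_row` above can be condensed into a small sketch. This is an illustrative stand-alone helper, not part of the google-cloud-bigtable API; the function name is invented for this example:

```python
def normalize_check_and_mutate_args(row_key, mutations):
    """Mirror the normalization check_and_mutate_row performs before the
    gapic call (illustrative only, not a real library helper)."""
    # str row keys are encoded to bytes before building the request
    if isinstance(row_key, str):
        row_key = row_key.encode("utf-8")
    # a single bare mutation is wrapped in a list
    if mutations is not None and not isinstance(mutations, list):
        mutations = [mutations]
    # None becomes an empty mutation list in the request dict
    return row_key, list(mutations or [])


key, muts = normalize_check_and_mutate_args("row-1", "set_cell_mutation")
print(key, muts)  # b'row-1' ['set_cell_mutation']
```

The same wrap-single-item-in-a-list pattern is applied independently to `true_case_mutations` and `false_case_mutations`, so callers can pass either one `Mutation` or a list of them.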
6 changes: 6 additions & 0 deletions google/cloud/bigtable/mutations.py
@@ -18,6 +18,8 @@
from dataclasses import dataclass
from abc import ABC, abstractmethod

from google.cloud.bigtable.read_modify_write_rules import MAX_INCREMENT_VALUE

# special value for SetCell mutation timestamps. If set, server will assign a timestamp
SERVER_SIDE_TIMESTAMP = -1

@@ -99,6 +101,10 @@ def __init__(
if isinstance(new_value, str):
new_value = new_value.encode()
elif isinstance(new_value, int):
if abs(new_value) > MAX_INCREMENT_VALUE:
raise ValueError(
"int values must be between -2**63 and 2**63 (64-bit signed int)"
)
new_value = new_value.to_bytes(8, "big", signed=True)
if not isinstance(new_value, bytes):
raise TypeError("new_value must be bytes, str, or int")
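The SetCell value coercion this hunk adds to can be sketched as a free function (the function name here is illustrative; in the library this logic lives inside `SetCell.__init__`):

```python
MAX_INCREMENT_VALUE = (1 << 63) - 1  # largest 64-bit signed int

def encode_set_cell_value(new_value):
    """Sketch of SetCell's value coercion: str -> utf-8 bytes, int -> 8-byte
    big-endian signed, anything else must already be bytes."""
    if isinstance(new_value, str):
        return new_value.encode("utf-8")
    if isinstance(new_value, int):
        # note: the abs() check also rejects -2**63, even though it fits
        # in a signed 64-bit int, matching the diff above
        if abs(new_value) > MAX_INCREMENT_VALUE:
            raise ValueError(
                "int values must be between -2**63 and 2**63 (64-bit signed int)"
            )
        return new_value.to_bytes(8, "big", signed=True)
    if not isinstance(new_value, bytes):
        raise TypeError("new_value must be bytes, str, or int")
    return new_value
```

Encoding ints as fixed-width big-endian bytes is what lets a later `ReadModifyWriteRule` increment treat the cell as a counter.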
59 changes: 48 additions & 11 deletions google/cloud/bigtable/read_modify_write_rules.py
@@ -14,22 +14,59 @@
#
from __future__ import annotations

from dataclasses import dataclass
import abc

# value must fit in 64-bit signed integer
MAX_INCREMENT_VALUE = (1 << 63) - 1

class ReadModifyWriteRule:
pass

class ReadModifyWriteRule(abc.ABC):
def __init__(self, family: str, qualifier: bytes | str):
qualifier = (
qualifier if isinstance(qualifier, bytes) else qualifier.encode("utf-8")
)
self.family = family
self.qualifier = qualifier

@abc.abstractmethod
def _to_dict(self):
raise NotImplementedError


@dataclass
class IncrementRule(ReadModifyWriteRule):
increment_amount: int
family: str
qualifier: bytes
def __init__(self, family: str, qualifier: bytes | str, increment_amount: int = 1):
if not isinstance(increment_amount, int):
raise TypeError("increment_amount must be an integer")
if abs(increment_amount) > MAX_INCREMENT_VALUE:
raise ValueError(
"increment_amount must be between -2**63 and 2**63 (64-bit signed int)"
)
super().__init__(family, qualifier)
self.increment_amount = increment_amount

def _to_dict(self):
return {
"family_name": self.family,
"column_qualifier": self.qualifier,
"increment_amount": self.increment_amount,
}


@dataclass
class AppendValueRule(ReadModifyWriteRule):
append_value: bytes
family: str
qualifier: bytes
def __init__(self, family: str, qualifier: bytes | str, append_value: bytes | str):
append_value = (
append_value.encode("utf-8")
if isinstance(append_value, str)
else append_value
)
if not isinstance(append_value, bytes):
raise TypeError("append_value must be bytes or str")
super().__init__(family, qualifier)
self.append_value = append_value

def _to_dict(self):
return {
"family_name": self.family,
"column_qualifier": self.qualifier,
"append_value": self.append_value,
}
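The rule hierarchy introduced above (base class normalizes the qualifier, subclasses add their payload and a `_to_dict` wire form) can be reproduced as a minimal stand-alone sketch; the class names are shortened stand-ins, not the library's:

```python
class Rule:
    """Stand-in for ReadModifyWriteRule: qualifier is normalized to bytes."""
    def __init__(self, family, qualifier):
        self.family = family
        self.qualifier = (
            qualifier if isinstance(qualifier, bytes) else qualifier.encode("utf-8")
        )


class Increment(Rule):
    """Stand-in for IncrementRule."""
    def __init__(self, family, qualifier, increment_amount=1):
        super().__init__(family, qualifier)
        self.increment_amount = increment_amount

    def _to_dict(self):
        # keys match the ReadModifyWriteRule proto fields
        return {
            "family_name": self.family,
            "column_qualifier": self.qualifier,
            "increment_amount": self.increment_amount,
        }


print(Increment("cf", "col", 5)._to_dict())
# {'family_name': 'cf', 'column_qualifier': b'col', 'increment_amount': 5}
```

`AppendValueRule` follows the same shape, swapping `increment_amount` for an `append_value` bytes payload.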
26 changes: 26 additions & 0 deletions google/cloud/bigtable/row.py
@@ -18,6 +18,8 @@
from typing import Sequence, Generator, overload, Any
from functools import total_ordering

from google.cloud.bigtable_v2.types import Row as RowPB

# Type aliases used internally for readability.
_family_type = str
_qualifier_type = bytes
@@ -72,6 +74,30 @@ def _index(
).append(cell)
return self._index_data

@classmethod
def _from_pb(cls, row_pb: RowPB) -> Row:
"""
Creates a row from a protobuf representation

Row objects are not intended to be created by users.
They are returned by the Bigtable backend.
"""
row_key: bytes = row_pb.key
cell_list: list[Cell] = []
for family in row_pb.families:
for column in family.columns:
for cell in column.cells:
new_cell = Cell(
value=cell.value,
row_key=row_key,
family=family.name,
qualifier=column.qualifier,
timestamp_micros=cell.timestamp_micros,
labels=list(cell.labels) if cell.labels else None,
)
cell_list.append(new_cell)
return cls(row_key, cells=cell_list)

def get_cells(
self, family: str | None = None, qualifier: str | bytes | None = None
) -> list[Cell]:
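The triple loop in `_from_pb` flattens the proto's nested families → columns → cells structure into one cell list. A sketch with `SimpleNamespace` standing in for the protobuf message (so it runs without the bigtable proto types):

```python
from types import SimpleNamespace as NS

def flatten_row_pb(row_pb):
    """Sketch of Row._from_pb's nesting walk, collecting
    (family, qualifier, value) tuples instead of Cell objects."""
    cells = []
    for family in row_pb.families:
        for column in family.columns:
            for cell in column.cells:
                cells.append((family.name, column.qualifier, cell.value))
    return cells


# a minimal fake RowPB: one family, one column, one cell
row_pb = NS(key=b"k", families=[
    NS(name="cf", columns=[
        NS(qualifier=b"q", cells=[NS(value=b"v", timestamp_micros=0, labels=[])]),
    ]),
])
print(flatten_row_pb(row_pb))  # [('cf', b'q', b'v')]
```

In the real method each tuple becomes a `Cell` carrying the shared `row_key`, plus `timestamp_micros` and `labels` from the proto cell.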