feat: Add Couchbase Columnar as an Offline Store #5025

Open · wants to merge 15 commits into base: master
27 changes: 27 additions & 0 deletions Makefile
@@ -402,6 +402,33 @@ test-python-universal-qdrant-online:
-k "test_retrieve_online_documents" \
sdk/python/tests/integration/online_store/test_universal_online.py

# To use Couchbase as an offline store, you need to create a Couchbase Capella Columnar cluster on cloud.couchbase.com.
# Set the environment variables COUCHBASE_COLUMNAR_CONNECTION_STRING, COUCHBASE_COLUMNAR_USER, and COUCHBASE_COLUMNAR_PASSWORD
# to the details of your Couchbase Columnar cluster.
test-python-universal-couchbase-offline:
PYTHONPATH='.' \
FULL_REPO_CONFIGS_MODULE=sdk.python.feast.infra.offline_stores.contrib.couchbase_columnar_repo_configuration \
PYTEST_PLUGINS=feast.infra.offline_stores.contrib.couchbase_offline_store.tests \
COUCHBASE_COLUMNAR_CONNECTION_STRING=couchbases://<connection_string> \
COUCHBASE_COLUMNAR_USER=username \
COUCHBASE_COLUMNAR_PASSWORD=password \
python -m pytest -n 8 --integration \
-k "not test_historical_retrieval_with_validation and \
not test_historical_features_persisting and \
not test_universal_cli and \
not test_go_feature_server and \
not test_feature_logging and \
not test_reorder_columns and \
not test_logged_features_validation and \
not test_lambda_materialization_consistency and \
not test_offline_write and \
not test_push_features_to_offline_store and \
not gcs_registry and \
not s3_registry and \
not test_snowflake and \
not test_universal_types" \
sdk/python/tests

test-python-universal-couchbase-online:
PYTHONPATH='.' \
FULL_REPO_CONFIGS_MODULE=sdk.python.feast.infra.online_stores.contrib.couchbase_repo_configuration \
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -93,6 +93,7 @@
* [BigQuery](reference/offline-stores/bigquery.md)
* [Redshift](reference/offline-stores/redshift.md)
* [DuckDB](reference/offline-stores/duckdb.md)
* [Couchbase Columnar (contrib)](reference/offline-stores/couchbase.md)
* [Spark (contrib)](reference/offline-stores/spark.md)
* [PostgreSQL (contrib)](reference/offline-stores/postgres.md)
* [Trino (contrib)](reference/offline-stores/trino.md)
4 changes: 4 additions & 0 deletions docs/reference/data-sources/README.md
@@ -34,6 +34,10 @@ Please see [Data Source](../../getting-started/concepts/data-ingestion.md) for a
[kinesis.md](kinesis.md)
{% endcontent-ref %}

{% content-ref url="couchbase.md" %}
[couchbase.md](couchbase.md)
{% endcontent-ref %}

{% content-ref url="spark.md" %}
[spark.md](spark.md)
{% endcontent-ref %}
37 changes: 37 additions & 0 deletions docs/reference/data-sources/couchbase.md
@@ -0,0 +1,37 @@
# Couchbase Columnar source (contrib)

## Description

Couchbase Columnar data sources are [Couchbase Capella Columnar](https://docs.couchbase.com/columnar/intro/intro.html) collections that can be used as a source for feature data. **Note that Couchbase Columnar is available through [Couchbase Capella](https://cloud.couchbase.com/).**

## Disclaimer

The Couchbase Columnar data source does not achieve full test coverage.
Please do not assume complete stability.

## Examples

Defining a Couchbase Columnar source:

```python
from feast.infra.offline_stores.contrib.couchbase_offline_store.couchbase_source import (
CouchbaseColumnarSource,
)

driver_stats_source = CouchbaseColumnarSource(
name="driver_hourly_stats_source",
query="SELECT * FROM Default.Default.`feast_driver_hourly_stats`",
database="Default",
scope="Default",
collection="feast_driver_hourly_stats",
timestamp_field="event_timestamp",
created_timestamp_column="created",
)
```

The full set of configuration options is available [here](https://rtd.feast.dev/en/master/#feast.infra.offline_stores.contrib.couchbase_offline_store.couchbase_source.CouchbaseColumnarSource).

## Supported Types

Couchbase Capella Columnar data sources support `BOOLEAN`, `STRING`, `BIGINT`, and `DOUBLE` primitive types.
For a comparison against other batch data sources, please see [here](overview.md#functionality-matrix).
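The exact mapping to Feast value types lives in the contrib store's type-map module; as a hedged sketch (the Feast-side names below are a plausible assumption, not a quote of that module), the four supported primitives would surface roughly as:

```python
# Assumed mapping from Couchbase Columnar primitive types to Feast value
# types; consult the contrib type_map module for the authoritative version.
COLUMNAR_TO_FEAST = {
    "BOOLEAN": "BOOL",
    "STRING": "STRING",
    "BIGINT": "INT64",
    "DOUBLE": "DOUBLE",
}

# Any other Columnar type (arrays, nested objects) has no Feast equivalent
# here and should be projected away or cast in the source query.
unsupported = lambda columnar_type: columnar_type not in COLUMNAR_TO_FEAST
```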
22 changes: 11 additions & 11 deletions docs/reference/data-sources/overview.md
@@ -18,14 +18,14 @@ Details for each specific data source can be found [here](README.md).

Below is a matrix indicating which data sources support which types.

| | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino |
| :-------------------------------- | :-- | :-- |:----------| :-- | :-- | :-- | :-- |
| `bytes` | yes | yes | yes | yes | yes | yes | yes |
| `string` | yes | yes | yes | yes | yes | yes | yes |
| `int32` | yes | yes | yes | yes | yes | yes | yes |
| `int64` | yes | yes | yes | yes | yes | yes | yes |
| `float32` | yes | yes | yes | yes | yes | yes | yes |
| `float64` | yes | yes | yes | yes | yes | yes | yes |
| `bool` | yes | yes | yes | yes | yes | yes | yes |
| `timestamp` | yes | yes | yes | yes | yes | yes | yes |
| array types | yes | yes | yes | no | yes | yes | no |
| | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | Couchbase |
| :-------------------------------- | :-- | :-- |:----------| :-- | :-- | :-- | :-- |:----------|
| `bytes` | yes | yes | yes | yes | yes | yes | yes | yes |
| `string` | yes | yes | yes | yes | yes | yes | yes | yes |
| `int32` | yes | yes | yes | yes | yes | yes | yes | yes |
| `int64` | yes | yes | yes | yes | yes | yes | yes | yes |
| `float32` | yes | yes | yes | yes | yes | yes | yes | yes |
| `float64` | yes | yes | yes | yes | yes | yes | yes | yes |
| `bool` | yes | yes | yes | yes | yes | yes | yes | yes |
| `timestamp` | yes | yes | yes | yes | yes | yes | yes | yes |
| array types | yes | yes | yes | no | yes | yes | no | no |
4 changes: 4 additions & 0 deletions docs/reference/offline-stores/README.md
@@ -26,6 +26,10 @@ Please see [Offline Store](../../getting-started/components/offline-store.md) fo
[duckdb.md](duckdb.md)
{% endcontent-ref %}

{% content-ref url="couchbase.md" %}
[couchbase.md](couchbase.md)
{% endcontent-ref %}

{% content-ref url="spark.md" %}
[spark.md](spark.md)
{% endcontent-ref %}
79 changes: 79 additions & 0 deletions docs/reference/offline-stores/couchbase.md
@@ -0,0 +1,79 @@
# Couchbase Columnar offline store (contrib)

## Description

The Couchbase Columnar offline store provides support for reading [CouchbaseColumnarSources](../data-sources/couchbase.md). **Note that Couchbase Columnar is available through [Couchbase Capella](https://cloud.couchbase.com/).**
* Entity dataframes can be provided as a SQL++ query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Couchbase Capella Columnar as a collection.
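As a minimal sketch, either form of entity dataframe can be built as follows (the column names, entity keys, and collection name are illustrative assumptions):

```python
import pandas as pd

# Option 1: a Pandas dataframe pairing entity keys with the timestamps
# to join features on. It gets uploaded to Columnar as a collection.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": pd.to_datetime(
            ["2024-01-01 10:00:00", "2024-01-01 11:00:00"], utc=True
        ),
    }
)

# Option 2: a SQL++ query string producing the same shape of result,
# evaluated directly against the Columnar cluster.
entity_sql = """
SELECT driver_id, event_timestamp
FROM Default.Default.`feast_driver_entities`
"""
```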

## Disclaimer

The Couchbase Columnar offline store does not achieve full test coverage.
Please do not assume complete stability.

## Getting started

To use this offline store, run `pip install 'feast[couchbase]'`. You can then get started by running `feast init -t couchbase`.

To get started with Couchbase Capella Columnar:
1. Sign up for a [Couchbase Capella](https://cloud.couchbase.com/) account
2. [Deploy a Columnar cluster](https://docs.couchbase.com/columnar/admin/prepare-project.html)
3. [Create an Access Control Account](https://docs.couchbase.com/columnar/admin/auth/auth-data.html)
- This account should be able to read and write.
- For testing purposes, it is recommended to assign all roles to avoid any permission issues.
4. [Configure allowed IP addresses](https://docs.couchbase.com/columnar/admin/ip-allowed-list.html)
- You must allow the IP address of the machine running Feast.


## Example

{% code title="feature_store.yaml" %}
```yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
type: couchbase
connection_string: COUCHBASE_COLUMNAR_CONNECTION_STRING # Copied from Settings > Connection String page in Capella Columnar console, starts with couchbases://
user: COUCHBASE_COLUMNAR_USER # Couchbase cluster access name from Settings > Access Control page in Capella Columnar console
password: COUCHBASE_COLUMNAR_PASSWORD # Couchbase password from Settings > Access Control page in Capella Columnar console
timeout: 120 # Timeout in seconds for Columnar operations, optional
online_store:
path: data/online_store.db
```
{% endcode %}

Note that `timeout` is an optional parameter.
The full set of configuration options is available in [CouchbaseColumnarOfflineStoreConfig](https://rtd.feast.dev/en/master/#feast.infra.offline_stores.contrib.couchbase_offline_store.couchbase.CouchbaseColumnarOfflineStoreConfig).


## Functionality Matrix

The set of functionality supported by offline stores is described in detail [here](overview.md#functionality).
Below is a matrix indicating which functionality is supported by the Couchbase Columnar offline store.

| | Couchbase Columnar |
| :----------------------------------------------------------------- |:-------------------|
| `get_historical_features` (point-in-time correct join) | yes |
| `pull_latest_from_table_or_query` (retrieve latest feature values) | yes |
| `pull_all_from_table_or_query` (retrieve a saved dataset) | yes |
| `offline_write_batch` (persist dataframes to offline store) | no |
| `write_logged_features` (persist logged features to offline store) | no |
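The point-in-time correct join that `get_historical_features` performs can be illustrated in plain pandas (column names here are assumptions): each entity row is matched with the latest feature values at or before its `event_timestamp`, never after.

```python
import pandas as pd

# Feature rows observed over time for one driver.
features = pd.DataFrame(
    {
        "driver_id": [1001, 1001],
        "event_timestamp": pd.to_datetime(["2024-01-01 09:00", "2024-01-01 10:30"]),
        "conv_rate": [0.5, 0.7],
    }
)

# One entity row asking: what were the features as of 10:00?
entities = pd.DataFrame(
    {
        "driver_id": [1001],
        "event_timestamp": pd.to_datetime(["2024-01-01 10:00"]),
    }
)

# merge_asof with the default "backward" direction takes the most recent
# feature row at or before each entity timestamp -- the point-in-time rule.
joined = pd.merge_asof(
    entities.sort_values("event_timestamp"),
    features.sort_values("event_timestamp"),
    on="event_timestamp",
    by="driver_id",
)
# The 10:00 entity row picks up conv_rate 0.5 (from 09:00), not 0.7 (from 10:30).
```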

Below is a matrix indicating which functionality is supported by `CouchbaseColumnarRetrievalJob`.

| | Couchbase Columnar |
| ----------------------------------------------------- |--------------------|
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | yes |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full [functionality matrix](overview.md#functionality-matrix).
42 changes: 21 additions & 21 deletions docs/reference/offline-stores/overview.md
@@ -31,28 +31,28 @@ Details for each specific offline store, such as how to configure it in a `featu

Below is a matrix indicating which offline stores support which methods.

| | Dask | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino |
| :-------------------------------- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| `get_historical_features` | yes | yes | yes | yes | yes | yes | yes |
| `pull_latest_from_table_or_query` | yes | yes | yes | yes | yes | yes | yes |
| `pull_all_from_table_or_query` | yes | yes | yes | yes | yes | yes | yes |
| `offline_write_batch` | yes | yes | yes | yes | no | no | no |
| `write_logged_features` | yes | yes | yes | yes | no | no | no |
| | Dask | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | Couchbase |
| :-------------------------------- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| `get_historical_features` | yes | yes | yes | yes | yes | yes | yes | yes |
| `pull_latest_from_table_or_query` | yes | yes | yes | yes | yes | yes | yes | yes |
| `pull_all_from_table_or_query` | yes | yes | yes | yes | yes | yes | yes | yes |
| `offline_write_batch` | yes | yes | yes | yes | no | no | no | no |
| `write_logged_features` | yes | yes | yes | yes | no | no | no | no |


Below is a matrix indicating which `RetrievalJob`s support what functionality.

| | Dask | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | DuckDB |
| --------------------------------- | --- | --- | --- | --- | --- | --- | --- | --- |
| export to dataframe | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow table | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow batches | no | no | no | yes | no | no | no | no |
| export to SQL | no | yes | yes | yes | yes | no | yes | no |
| export to data lake (S3, GCS, etc.) | no | no | yes | no | yes | no | no | no |
| export to data warehouse | no | yes | yes | yes | yes | no | no | no |
| export as Spark dataframe | no | no | yes | no | no | yes | no | no |
| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes | yes |
| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no | no |
| persist results in the offline store | yes | yes | yes | yes | yes | yes | no | yes |
| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes | no |
| read partitioned data | yes | yes | yes | yes | yes | yes | yes | yes |
| | Dask | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | DuckDB | Couchbase |
| --------------------------------- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| export to dataframe | yes | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow table | yes | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow batches | no | no | no | yes | no | no | no | no | no |
| export to SQL | no | yes | yes | yes | yes | no | yes | no | yes |
| export to data lake (S3, GCS, etc.) | no | no | yes | no | yes | no | no | no | yes |
| export to data warehouse | no | yes | yes | yes | yes | no | no | no | yes |
| export as Spark dataframe | no | no | yes | no | no | yes | no | no | no |
| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes | yes | yes |
| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no | no | no |
| persist results in the offline store | yes | yes | yes | yes | yes | yes | no | yes | yes |
| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes | no | yes |
| read partitioned data | yes | yes | yes | yes | yes | yes | yes | yes | yes |