From adc2939fd419d7f8f6ba02cf24703dc42dd7d873 Mon Sep 17 00:00:00 2001 From: Tornike Gurgenidze Date: Sat, 4 May 2024 10:01:21 +0400 Subject: [PATCH] docs: Add DuckDB offline store (#4174) --- docs/SUMMARY.md | 1 + docs/reference/data-sources/file.md | 6 +-- docs/reference/data-sources/overview.md | 4 +- docs/reference/offline-stores/README.md | 4 ++ docs/reference/offline-stores/duckdb.md | 56 +++++++++++++++++++++++ docs/reference/offline-stores/overview.md | 28 ++++++------ 6 files changed, 78 insertions(+), 21 deletions(-) create mode 100644 docs/reference/offline-stores/duckdb.md diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md index 3673edf6cf..2e205dee0a 100644 --- a/docs/SUMMARY.md +++ b/docs/SUMMARY.md @@ -80,6 +80,7 @@ * [Snowflake](reference/offline-stores/snowflake.md) * [BigQuery](reference/offline-stores/bigquery.md) * [Redshift](reference/offline-stores/redshift.md) + * [DuckDB](reference/offline-stores/duckdb.md) * [Spark (contrib)](reference/offline-stores/spark.md) * [PostgreSQL (contrib)](reference/offline-stores/postgres.md) * [Trino (contrib)](reference/offline-stores/trino.md) diff --git a/docs/reference/data-sources/file.md b/docs/reference/data-sources/file.md index 5895b1a8ce..d3fd09deca 100644 --- a/docs/reference/data-sources/file.md +++ b/docs/reference/data-sources/file.md @@ -3,11 +3,7 @@ ## Description File data sources are files on disk or on S3. -Currently only Parquet files are supported. - -{% hint style="warning" %} -FileSource is meant for development purposes only and is not optimized for production use. -{% endhint %} +Currently only Parquet and Delta formats are supported. ## Example diff --git a/docs/reference/data-sources/overview.md b/docs/reference/data-sources/overview.md index 302c19b049..5c2fdce9fd 100644 --- a/docs/reference/data-sources/overview.md +++ b/docs/reference/data-sources/overview.md @@ -2,8 +2,8 @@ ## Functionality -In Feast, each batch data source is associated with a corresponding offline store. 
-For example, a `SnowflakeSource` can only be processed by the Snowflake offline store. +In Feast, each batch data source is associated with one or more corresponding offline stores. +For example, a `SnowflakeSource` can only be processed by the Snowflake offline store, while a `FileSource` can be processed by both File and DuckDB offline stores. Otherwise, the primary difference between batch data sources is the set of supported types. Feast has an internal type system, and aims to support eight primitive types (`bytes`, `string`, `int32`, `int64`, `float32`, `float64`, `bool`, and `timestamp`) along with the corresponding array types. However, not every batch data source supports all of these types. diff --git a/docs/reference/offline-stores/README.md b/docs/reference/offline-stores/README.md index f4e3af2f34..33eca6d426 100644 --- a/docs/reference/offline-stores/README.md +++ b/docs/reference/offline-stores/README.md @@ -22,6 +22,10 @@ Please see [Offline Store](../../getting-started/architecture-and-components/off [redshift.md](redshift.md) {% endcontent-ref %} +{% content-ref url="duckdb.md" %} +[duckdb.md](duckdb.md) +{% endcontent-ref %} + {% content-ref url="spark.md" %} [spark.md](spark.md) {% endcontent-ref %} diff --git a/docs/reference/offline-stores/duckdb.md b/docs/reference/offline-stores/duckdb.md new file mode 100644 index 0000000000..da3c3cd0c7 --- /dev/null +++ b/docs/reference/offline-stores/duckdb.md @@ -0,0 +1,56 @@ +# DuckDB offline store + +## Description + +The DuckDB offline store provides support for reading [FileSources](../data-sources/file.md). It can read both Parquet and Delta formats. The DuckDB offline store uses [Ibis](https://ibis-project.org/) under the hood to convert offline store operations to DuckDB queries. + +* Entity dataframes can be provided as Pandas dataframes. + +## Getting started +In order to use this offline store, you'll need to run `pip install 'feast[duckdb]'`. 
+ +## Example + +{% code title="feature_store.yaml" %} +```yaml +project: my_project +registry: data/registry.db +provider: local +offline_store: + type: duckdb +online_store: + path: data/online_store.db +``` +{% endcode %} + +## Functionality Matrix + +The set of functionality supported by offline stores is described in detail [here](overview.md#functionality). +Below is a matrix indicating which functionality is supported by the DuckDB offline store. + +| | DuckDB | +| :----------------------------------------------------------------- | :---- | +| `get_historical_features` (point-in-time correct join) | yes | +| `pull_latest_from_table_or_query` (retrieve latest feature values) | yes | +| `pull_all_from_table_or_query` (retrieve a saved dataset) | yes | +| `offline_write_batch` (persist dataframes to offline store) | yes | +| `write_logged_features` (persist logged features to offline store) | yes | + +Below is a matrix indicating which functionality is supported by `IbisRetrievalJob`. + +| | DuckDB | +| ----------------------------------------------------- | ----- | +| export to dataframe | yes | +| export to arrow table | yes | +| export to arrow batches | no | +| export to SQL | no | +| export to data lake (S3, GCS, etc.) | no | +| export to data warehouse | no | +| export as Spark dataframe | no | +| local execution of Python-based on-demand transforms | yes | +| remote execution of Python-based on-demand transforms | no | +| persist results in the offline store | yes | +| preview the query plan before execution | no | +| read partitioned data | yes | + +To compare this set of functionality against other offline stores, please see the full [functionality matrix](overview.md#functionality-matrix). 
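The point-in-time correct join that `get_historical_features` performs can be sketched in plain pandas. This is an illustration with synthetic data, not the store's actual implementation (which compiles an equivalent query to DuckDB via Ibis): for each entity row, it picks the latest feature value at or before that row's event timestamp.

```python
import pandas as pd

# Synthetic feature data, as it might live in a Parquet FileSource.
features = pd.DataFrame({
    "driver_id": [1, 1, 2],
    "event_timestamp": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-02"]),
    "conv_rate": [0.1, 0.3, 0.5],
})

# Entity dataframe: the rows we want point-in-time correct features for.
entity_df = pd.DataFrame({
    "driver_id": [1, 2],
    "event_timestamp": pd.to_datetime(["2024-01-02", "2024-01-04"]),
})

# merge_asof selects, per entity row, the most recent feature value at or
# before the entity timestamp -- the same point-in-time join semantics.
result = pd.merge_asof(
    entity_df.sort_values("event_timestamp"),
    features.sort_values("event_timestamp"),
    on="event_timestamp",
    by="driver_id",
)
# Driver 1 at 2024-01-02 gets 0.1 (from 2024-01-01); the later 0.3 value
# from 2024-01-03 is correctly excluded. Driver 2 gets 0.5.
```

Note that `merge_asof` defaults to a backward search, which is exactly what prevents feature leakage from the future in this sketch.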
diff --git a/docs/reference/offline-stores/overview.md b/docs/reference/offline-stores/overview.md index 8ce9045496..4d7681e38c 100644 --- a/docs/reference/offline-stores/overview.md +++ b/docs/reference/offline-stores/overview.md @@ -42,17 +42,17 @@ Below is a matrix indicating which offline stores support which methods. Below is a matrix indicating which `RetrievalJob`s support what functionality. -| | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | -| --------------------------------- | --- | --- | --- | --- | --- | --- | --- | -| export to dataframe | yes | yes | yes | yes | yes | yes | yes | -| export to arrow table | yes | yes | yes | yes | yes | yes | yes | -| export to arrow batches | no | no | no | yes | no | no | no | -| export to SQL | no | yes | yes | yes | yes | no | yes | -| export to data lake (S3, GCS, etc.) | no | no | yes | no | yes | no | no | -| export to data warehouse | no | yes | yes | yes | yes | no | no | -| export as Spark dataframe | no | no | yes | no | no | yes | no | -| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes | -| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no | -| persist results in the offline store | yes | yes | yes | yes | yes | yes | no | -| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes | -| read partitioned data | yes | yes | yes | yes | yes | yes | yes | +| | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | DuckDB | +| --------------------------------- | --- | --- | --- | --- | --- | --- | --- | --- | +| export to dataframe | yes | yes | yes | yes | yes | yes | yes | yes | +| export to arrow table | yes | yes | yes | yes | yes | yes | yes | yes | +| export to arrow batches | no | no | no | yes | no | no | no | no | +| export to SQL | no | yes | yes | yes | yes | no | yes | no | +| export to data lake (S3, GCS, etc.) 
| no | no | yes | no | yes | no | no | no | +| export to data warehouse | no | yes | yes | yes | yes | no | no | no | +| export as Spark dataframe | no | no | yes | no | no | yes | no | no | +| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes | yes | +| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no | no | +| persist results in the offline store | yes | yes | yes | yes | yes | yes | no | yes | +| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes | no | +| read partitioned data | yes | yes | yes | yes | yes | yes | yes | yes |