Commit adc2939

docs: Add DuckDB offline store (#4174)

tokoko authored May 4, 2024
1 parent ee51fbf commit adc2939
Showing 6 changed files with 78 additions and 21 deletions.
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -80,6 +80,7 @@
* [Snowflake](reference/offline-stores/snowflake.md)
* [BigQuery](reference/offline-stores/bigquery.md)
* [Redshift](reference/offline-stores/redshift.md)
+* [DuckDB](reference/offline-stores/duckdb.md)
* [Spark (contrib)](reference/offline-stores/spark.md)
* [PostgreSQL (contrib)](reference/offline-stores/postgres.md)
* [Trino (contrib)](reference/offline-stores/trino.md)
6 changes: 1 addition & 5 deletions docs/reference/data-sources/file.md
@@ -3,11 +3,7 @@
## Description

File data sources are files on disk or on S3.
-Currently only Parquet files are supported.
-
-{% hint style="warning" %}
-FileSource is meant for development purposes only and is not optimized for production use.
-{% endhint %}
+Currently only Parquet and Delta formats are supported.

## Example
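
The commit's example is collapsed in this diff view. For orientation only, a minimal `FileSource` pointing at a Parquet file might look like the sketch below; the path and column names are illustrative and not taken from this commit:

```python
from feast import FileSource

# Illustrative only: a local Parquet file with an event timestamp column.
driver_stats_source = FileSource(
    name="driver_stats_source",
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)
```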

4 changes: 2 additions & 2 deletions docs/reference/data-sources/overview.md
@@ -2,8 +2,8 @@

## Functionality

-In Feast, each batch data source is associated with a corresponding offline store.
-For example, a `SnowflakeSource` can only be processed by the Snowflake offline store.
+In Feast, each batch data source is associated with one or more corresponding offline stores.
+For example, a `SnowflakeSource` can only be processed by the Snowflake offline store, while a `FileSource` can be processed by both the File and DuckDB offline stores.
Otherwise, the primary difference between batch data sources is the set of supported types.
Feast has an internal type system, and aims to support eight primitive types (`bytes`, `string`, `int32`, `int64`, `float32`, `float64`, `bool`, and `timestamp`) along with the corresponding array types.
However, not every batch data source supports all of these types.
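
As a concrete illustration, these are the types a `Field` declaration references when defining a feature view schema; a brief sketch with hypothetical field names:

```python
from feast import Field
from feast.types import Array, Bool, Float32, Int64

# Hypothetical schema fields; each dtype is one of Feast's internal
# primitive types (or an array of one) described above.
schema = [
    Field(name="driver_id", dtype=Int64),
    Field(name="conv_rate", dtype=Float32),
    Field(name="is_active", dtype=Bool),
    Field(name="recent_trip_ids", dtype=Array(Int64)),
]
```
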
4 changes: 4 additions & 0 deletions docs/reference/offline-stores/README.md
@@ -22,6 +22,10 @@ Please see [Offline Store](../../getting-started/architecture-and-components/off
[redshift.md](redshift.md)
{% endcontent-ref %}

+{% content-ref url="duckdb.md" %}
+[duckdb.md](duckdb.md)
+{% endcontent-ref %}
+
{% content-ref url="spark.md" %}
[spark.md](spark.md)
{% endcontent-ref %}
56 changes: 56 additions & 0 deletions docs/reference/offline-stores/duckdb.md
@@ -0,0 +1,56 @@
# DuckDB offline store

## Description

The DuckDB offline store provides support for reading [FileSources](../data-sources/file.md). It can read both Parquet and Delta formats. The DuckDB offline store uses [Ibis](https://ibis-project.org/) under the hood to convert offline store operations into DuckDB queries.

* Entity dataframes can be provided as Pandas dataframes.

## Getting started

In order to use this offline store, you'll need to run `pip install 'feast[duckdb]'`.

## Example

{% code title="feature_store.yaml" %}
```yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: duckdb
online_store:
  path: data/online_store.db
```
{% endcode %}
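
With this configuration in place, features are retrieved through the standard `FeatureStore` API. Below is a minimal sketch; the repo layout, entity dataframe, and feature names are hypothetical, not part of this commit:

```python
from datetime import datetime

import pandas as pd

from feast import FeatureStore

# Assumes a feature repo in the current directory using the
# feature_store.yaml above; all names below are illustrative.
store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [datetime(2024, 4, 1), datetime(2024, 4, 2)],
    }
)

# DuckDB (via Ibis) executes the point-in-time join.
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
).to_df()
```
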
## Functionality Matrix

The set of functionality supported by offline stores is described in detail [here](overview.md#functionality).
Below is a matrix indicating which functionality is supported by the DuckDB offline store.

| | DuckDB |
| :----------------------------------------------------------------- | :---- |
| `get_historical_features` (point-in-time correct join) | yes |
| `pull_latest_from_table_or_query` (retrieve latest feature values) | yes |
| `pull_all_from_table_or_query` (retrieve a saved dataset) | yes |
| `offline_write_batch` (persist dataframes to offline store) | yes |
| `write_logged_features` (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by `IbisRetrievalJob`.

| | DuckDB |
| ----------------------------------------------------- | ----- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | no |
| read partitioned data | yes |
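
For instance, the two supported export paths in the matrix above correspond to the `to_df` and `to_arrow` methods on the returned job; a hedged sketch, reusing the hypothetical `store` and `entity_df` from the example above:

```python
# Reusing the hypothetical store and entity_df from the earlier example.
job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
)

df = job.to_df()        # export to dataframe (Pandas)
table = job.to_arrow()  # export to arrow table (pyarrow.Table)
```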

To compare this set of functionality against other offline stores, please see the full [functionality matrix](overview.md#functionality-matrix).
28 changes: 14 additions & 14 deletions docs/reference/offline-stores/overview.md
@@ -42,17 +42,17 @@ Below is a matrix indicating which offline stores support which methods.

Below is a matrix indicating which `RetrievalJob`s support what functionality.

-| | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino |
-| --------------------------------- | --- | --- | --- | --- | --- | --- | --- |
-| export to dataframe | yes | yes | yes | yes | yes | yes | yes |
-| export to arrow table | yes | yes | yes | yes | yes | yes | yes |
-| export to arrow batches | no | no | no | yes | no | no | no |
-| export to SQL | no | yes | yes | yes | yes | no | yes |
-| export to data lake (S3, GCS, etc.) | no | no | yes | no | yes | no | no |
-| export to data warehouse | no | yes | yes | yes | yes | no | no |
-| export as Spark dataframe | no | no | yes | no | no | yes | no |
-| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes |
-| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no |
-| persist results in the offline store | yes | yes | yes | yes | yes | yes | no |
-| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes |
-| read partitioned data | yes | yes | yes | yes | yes | yes | yes |
+| | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | DuckDB |
+| --------------------------------- | --- | --- | --- | --- | --- | --- | --- | --- |
+| export to dataframe | yes | yes | yes | yes | yes | yes | yes | yes |
+| export to arrow table | yes | yes | yes | yes | yes | yes | yes | yes |
+| export to arrow batches | no | no | no | yes | no | no | no | no |
+| export to SQL | no | yes | yes | yes | yes | no | yes | no |
+| export to data lake (S3, GCS, etc.) | no | no | yes | no | yes | no | no | no |
+| export to data warehouse | no | yes | yes | yes | yes | no | no | no |
+| export as Spark dataframe | no | no | yes | no | no | yes | no | no |
+| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes | yes |
+| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no | no |
+| persist results in the offline store | yes | yes | yes | yes | yes | yes | no | yes |
+| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes | no |
+| read partitioned data | yes | yes | yes | yes | yes | yes | yes | yes |
