Update docs for upcoming 0.2.0 release #158

Merged (15 commits) on May 17, 2022
Clarify note about dtensor
mattdangerw committed May 16, 2022
commit fb263a91b9601b9f941f3b109ace6836e8306889
ROADMAP.md — 11 changes: 7 additions & 4 deletions
@@ -106,10 +106,13 @@ example demonstrating the component in an end-to-end architecture.
 
 By the end of 2022, we should have an actively growing collection of example
 models, with a standardized set of training scripts, that match expected
-performance as reported in publications. On the scalability front, we should
-be running our training scripts on multi-worker GPU and TPU settings, using
-[DTensor](https://www.tensorflow.org/guide/dtensor_overview) for data parallel
-training.
+performance as reported in publications.
+
+On the scalability front, we should have at least one example demonstrating both
+data parallel and model parallel training, in a multi-worker GPU and TPU
+setting, leveraging
+[DTensor](https://www.tensorflow.org/guide/dtensor_overview) for distributed
+support.
 
 ### Tools for data preprocessing and postprocessing for end-to-end workflows
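The note above describes using DTensor for data parallel and model parallel training. As a minimal sketch (not code from this PR), a data-parallel DTensor setup might look like the following, assuming TensorFlow >= 2.9 and using logical CPU devices purely for illustration; a real run would target multi-worker GPUs or TPUs:

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Illustration only: split one physical CPU into 8 logical devices so this
# sketch runs on a single host.
phys = tf.config.list_physical_devices("CPU")
tf.config.set_logical_device_configuration(
    phys[0], [tf.config.LogicalDeviceConfiguration()] * 8
)

# A 1-D "batch" mesh over the 8 devices: pure data parallelism. A model
# parallel setup would add a second mesh dimension for sharding weights.
mesh = dtensor.create_mesh(
    [("batch", 8)], devices=[f"CPU:{i}" for i in range(8)]
)

# Shard the leading (batch) dimension across the mesh; replicate the rest.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)

# Create a tensor already distributed with that layout: its global shape is
# (8, 4), with the batch axis split across the 8 devices.
x = dtensor.call_with_layout(tf.ones, layout, shape=(8, 4))
print(x.shape)
```

A model-parallel variant would follow the same pattern with a 2-D mesh (e.g. `[("batch", 4), ("model", 2)]`) and layouts that shard weight dimensions over the `"model"` axis.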