TFRT
Ashok Bhat edited this page May 25, 2020 · 23 revisions
- A new runtime intended to replace the existing TensorFlow runtime.
- Responsible for efficient execution of kernels (low-level, device-specific primitives) on targeted hardware.
- Makes efficient use of multithreaded host CPUs.
- Supports fully asynchronous programming models.
- Focuses on low-level efficiency.
- Helps hardware makers integrate edge and datacenter devices into TensorFlow in a modular way.
- TFRT utilizes MLIR’s compiler infrastructure to generate an optimized, target-specific representation of your computational graph that the runtime executes.
- TFRT uses MLIR’s extensible type system to support arbitrary C++ types in the runtime, which removes tensor-specific limitations.
- What is TFRT?
- When is TFRT going to be the default runtime?
- What are the main problems being solved by TFRT?
- What are the main use-cases for TFRT?
- What does TFRT rely on for CPU kernels?
- What is the relation of TFRT and MLIR?
- What is the relation between TFRT and TensorRT?
- What is the relation between TFRT and OneDNN?
- What is the relation between TFRT and TensorFlow Lite?
- TensorFlow
- TFRT: A new TensorFlow runtime - https://blog.tensorflow.org/2020/04/tfrt-new-tensorflow-runtime.html
- TVM
- MLIR
- OneDNN | Eigen