Any university that achieves innovative results by applying TF2 to AI research or applications, and commits to contributing the optimized code back to the TF2 GitHub repository, will receive the latest FPGA acceleration card free of charge. Companies participating in the program are also eligible for a substantial discount on FPGA acceleration cards.
Apply TF2 to academic research or real-world applications.
The work must be in a deep-learning-related field, including but not limited to:
- FPGA implementation of inference for neural networks such as CNN, LSTM, RNN, Transformer, and MLP
- Optimization algorithms for deep learning: model compression, model pruning, low-precision/mixed-precision computing, etc.
- Pre-processing and post-processing algorithms used in application scenarios
- FPGA inference: compute efficiency must exceed 70% of the hardware's peak performance.
- Optimization algorithms: must outperform currently open-sourced algorithms.
- Pre-processing/post-processing: must outperform currently open-sourced algorithms.
- The code must be contributed back within one year of application.
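The 70%-of-peak criterion above can be checked with a simple ratio of achieved to peak throughput. The sketch below is illustrative only (the function names, GOPS units, and threshold parameter are assumptions, not part of the program's official tooling):

```python
def compute_efficiency(achieved_gops: float, peak_gops: float) -> float:
    """Return achieved throughput as a fraction of the hardware peak.

    Both values are assumed to be in the same unit (e.g. GOPS).
    """
    if peak_gops <= 0:
        raise ValueError("peak_gops must be positive")
    return achieved_gops / peak_gops


def meets_requirement(achieved_gops: float, peak_gops: float,
                      threshold: float = 0.70) -> bool:
    """Check the program's >=70%-of-peak efficiency requirement."""
    return compute_efficiency(achieved_gops, peak_gops) >= threshold
```

For example, a design sustaining 7.5 GOPS on hardware with a 10 GOPS peak runs at 75% efficiency and would satisfy the requirement, while 5 GOPS on the same card (50%) would not.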