<!DOCTYPE html>
<html lang="en">
<head>
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-156955408-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-156955408-1');
</script>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>ONNX Runtime | Community</title>
<link rel="icon" href="./images/ONNXRuntime-Favicon.png" type="image/gif" sizes="16x16">
<link rel="stylesheet" href="css/fonts.css">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="stylesheet" href="css/custom.css">
<link rel="stylesheet" href="css/responsive.css">
</head>
<body>
<a class="skip-main" href="#skipMain">Skip to main content</a>
<div class="main-wrapper">
<div class="top-banner-bg">
<!-- Partial header.html Start-->
<div w3-include-html="header.html"></div>
<!-- Partial header.html End-->
<div id="skipMain" role="main">
<div class="outer-container mx-auto py-5">
<section class="blue-title-columns py-md-5 pb-4 pt-4 mt-5">
<div class="container-fluid">
<h1 class="mb-3 blue-text">Organizations and products using ONNX Runtime</h1>
<div id = "adobe" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/adobe-logo.png" alt="Adobe logo">
</div>
<div class="col-12 col-md-9 quote">“With ONNX Runtime, Adobe Target got flexibility and standardization in one package: flexibility for our customers to train ML models in the frameworks of their choice, and standardization to robustly deploy those models at scale for fast inference, to deliver true, real-time personalized experiences.”<br/><br/>
<div class="quote-attribution text-right">–Georgiana Copil, Senior Computer Scientist, Adobe</div>
</div>
</div>
<div id = "amd" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/amd-logo.png" alt="AMD logo">
</div>
<div class="col-12 col-md-9 quote">“The ONNX Runtime integration with AMD’s ROCm open software ecosystem helps our customers leverage the power of AMD Instinct GPUs to accelerate and scale their large machine learning models with flexibility across multiple frameworks.”<br/><br/>
<div class="quote-attribution text-right">–Andrew Dieckmann, Corporate Vice President and General Manager, AMD Data Center GPU & Accelerated Processing</div>
</div>
</div>
<div id = "ant-group" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/antgroup-logo.png" alt="Ant Group logo">
</div>
<div class="col-12 col-md-9 quote">“Using ONNX Runtime, we have improved the inference performance of many computer vision (CV) and natural language processing (NLP) models trained by multiple deep learning frameworks. These are part of the Alipay production system. We plan to use ONNX Runtime as the high-performance inference backend for more deep learning models in broad applications, such as click-through rate prediction and cross-modal prediction.”<br/><br/>
<div class="quote-attribution text-right">–Xiaoming Zhang, Head of Inference Team, Ant Group</div>
</div>
</div>
<div id = "atlas-experiment" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/ATLAS-logo.png" alt="Atlas Experiment logo">
</div>
<div class="col-12 col-md-9 quote">“At CERN in the ATLAS experiment, we have integrated the C++ API of ONNX Runtime into our software framework: Athena. We are currently performing inferences using ONNX models especially in the reconstruction of electrons and muons. We are benefiting from its C++ compatibility, platform*-to-ONNX converters (* Keras, TensorFlow, PyTorch, etc) and its thread safety.”<br/><br/>
<div class="quote-attribution text-right">–ATLAS Experiment team, CERN (European Organization for Nuclear Research)</div>
</div>
</div>
<div id = "bazaarvoice" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/bazaarvoice-logo.png" alt="Bazaarvoice logo">
</div>
<div class="col-12 col-md-9 quote">“Building and deploying AI solutions to the cloud at scale is complex. With massive datasets and performance considerations, finding a harmonious balance is crucial. ONNX Runtime provided us with the flexibility to package a scikit-learn model built with Python, deploy it serverlessly to a Node.js environment, and run it in the cloud with impressive performance.”<br/><br/>
<div class="quote-attribution text-right">–Matthew Leyburn, Software Engineer, Bazaarvoice</div>
</div>
</div>
<div id = "clearblade" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/clearblade-logo.png" alt="Clearblade logo">
</div>
<div class="col-12 col-md-9 quote">“ClearBlade’s integration of ONNX Runtime with our Enterprise loT and Edge Platforms enables customers and partners to build Al models using any industry Al tool they want to use. Using this solution, our customers can use the ONNX Runtime Go language APIs to seamlessly deploy any model to
run on equipment in remote locations or on the factory floor!”<br/><br/>
<div class="quote-attribution text-right">–Aaron Allsbrook, CTO & Founder, ClearBlade</div>
</div>
</div>
<div id = "ghostwriter-ai" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/ghostwriter-logo.png" alt="Ghostwriter AI logo">
</div>
<div class="col-12 col-md-9 quote">“At GhostWriter.AI, we integrate NLP models in different international markets and regulated industries. Our customers use many technology stacks and frameworks, which change over time. With ONNX Runtime, we can provide maximum performance combined with the total flexibility of making inferences using the technology our customers prefer, from Python to C#, deploying where they choose, from cloud to embedded systems.”<br/><br/>
<div class="quote-attribution text-right">–Mauro Bennici, CTO, Ghostwriter.AI</div>
</div>
</div>
<div id = "hugging-face" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/huggingface-logo.png" alt="Hugging Face logo">
</div>
<div class="col-12 col-md-9 quote">“We use ONNX Runtime to easily deploy thousands of open-source state-of-the-art models in the Hugging Face model hub and accelerate private models for customers of the Accelerated Inference API on CPU and GPU.”<br/><br/>
<div class="quote-attribution text-right">–Morgan Funtowicz, Machine Learning Engineer, Hugging Face</div>
</div>
</div>
<div id = "hypefactors" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/hypefactors-logo.png" alt="Hypefactors logo">
</div>
<div class="col-12 col-md-9 quote">“ONNX Runtime powers many of our Natural Language Processing (NLP) and Computer Vision (CV) models that crunch the global media landscape in real-time. It is our go-to framework for scaling our production workload, providing important features ranging from built-in quantization tools to easy GPU and VNNI acceleration.”<br/><br/>
<div class="quote-attribution text-right">–Viet Yen Nguyen, CTO, Hypefactors</div>
</div>
</div>
<div id = "infarm" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/infarm-logo.png" alt="InFarm logo">
</div>
<div class="col-12 col-md-9 quote">“InFarm delivers machine-learning powered solutions for intelligent farming, running computer vision models on a variety of hardware, including on-premise GPU clusters, edge computing devices like NVIDIA Jetsons, and cloud-based CPU and GPU clusters. ONNX Runtime enables InFarm to standardise the model formats and outputs of models generated across multiple teams to simplify deployment while also providing the best performance on all hardware targets.”<br/><br/>
<div class="quote-attribution text-right">–Ashley Walker, Chief Information and Technology Officer, InFarm</div>
</div>
</div>
<div id = "intel" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/intel-logo.png" alt="Intel logo">
</div>
<div class="col-12 col-md-9 quote">“We are excited to support ONNX Runtime on the Intel® Distribution of OpenVINO™. This accelerates machine learning inference across Intel hardware and gives developers the flexibility to choose the combination of Intel hardware that best meets their needs from CPU to VPU or FPGA.”<br/><br/>
<div class="quote-attribution text-right">–Jonathan Ballon, Vice President and General Manager, Intel Internet of Things Group</div>
</div>
</div>
<div id = "visual-studio" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/visual-studio-logo.png" alt="Visual Studio logo">
</div>
<div class="col-12 col-md-9 quote">“We use ONNX Runtime to accelerate model training for a 300M+ parameters model that powers code autocompletion in Visual Studio IntelliCode.”<br/><br/>
<div class="quote-attribution text-right">–Neel Sundaresan, Director SW Engineering, Data & AI, Developer Division, Microsoft</div>
</div>
</div>
<div id = "navitaire" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/navitaire-amadeus-logo.png" alt="Navitaire Amadeus logo">
</div>
<div class="col-12 col-md-9 quote">“With customers around the globe, we’re seeing increased interest in deploying more effective models to power pricing solutions via ONNX Runtime. ONNX Runtime’s performance has given us the confidence to use this solution with our customers with more extreme transaction volume requirements.”<br/><br/>
<div class="quote-attribution text-right">–Jason Coverston, Product Director, Navitaire</div>
</div>
</div>
<div id = "nvidia" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/nvidia.png" alt="NVIDIA logo">
</div>
<div class="col-12 col-md-9 quote">“ONNX Runtime enables our customers to easily apply NVIDIA TensorRT’s powerful optimizations to machine learning models, irrespective of the training framework, and deploy across NVIDIA GPUs and edge devices.”<br/><br/>
<div class="quote-attribution text-right">– Kari Ann Briski, Sr. Director, Accelerated Computing Software and AI Product, NVIDIA</div>
</div>
</div>
<div id = "openNLP" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/opennlp-logo.png" alt="Apache OpenNLP logo">
</div>
<div class="col-12 col-md-9 quote">“The integration of ONNX Runtime into Apache OpenNLP 2.0 enables easy use of state-of-the-art Natural Language Processing (NLP) models in the Java ecosystem. For libraries and applications already using OpenNLP, such as Apache Lucene and Apache Solr, using ONNX Runtime via OpenNLP provides exciting new possibilities.”<br/><br/>
<div class="quote-attribution text-right">–Jeff Zemerick, Search Relevance Engineer at OpenSource Connections and Chair of the Apache OpenNLP project</div>
</div>
</div>
<div id = "oracle" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/oracle-logo.png" alt="Oracle logo">
</div>
<div class="col-12 col-md-9 quote">“The ONNX Runtime API for Java enables Java developers and Oracle customers to seamlessly consume and execute ONNX machine-learning models, while taking advantage of the expressive power, high performance, and scalability of Java.”<br/><br/>
<div class="quote-attribution text-right">–Stephen Green, Director of Machine Learning Research Group, Oracle</div>
</div>
</div>
<div id = "peakspeed" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/PeakSpeed_logo.png" alt="Peakspeed logo">
</div>
<div class="col-12 col-md-9 quote">“Using a common model and code base, the ONNX Runtime allows Peakspeed to easily flip between platforms to help our customers choose the most cost-effective solution based on their infrastructure and requirements.”<br/><br/>
<div class="quote-attribution text-right">–Oscar Kramer, Chief Geospatial Scientist, Peakspeed</div>
</div>
</div>
<div id = "ptw" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/ptw-logo.png" alt="PTW logo">
</div>
<div class="col-12 col-md-9 quote">“The mission of PTW is to guarantee radiation therapy safely. Bringing an AI model from research into the clinic can be a challenge, however. These are very different software and hardware environments. ONNX Runtime bridges the gap and allows us to choose the best possible tools for research and be sure deployment into any environment will just work.”<br/><br/>
<div class="quote-attribution text-right">–Jan Weidner, Research Software Engineer, PTW Dosimetry</div>
</div>
</div>
<div id = "redis" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/redis-logo.png" alt="Redis logo">
</div>
<div class="col-12 col-md-9 quote">“ONNX Runtime underpins RedisAI's distinctive capability to run machine-learning and deep-learning model inference seamlessly inside of Redis. This integration allows data scientists to train models in their preferred ML framework (PyTorch, TensorFlow, etc), and serve those models from Redis for low-latency inference.”<br/><br/>
<div class="quote-attribution text-right">–Sam Partee, Principal Engineer, Applied AI, Redis</div>
</div>
</div>
<div id = "rockchip" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/Rockchip-logo.png" alt="Rockchip logo">
</div>
<div class="col-12 col-md-9 quote">“With support for ONNX Runtime, our customers and developers can cross the boundaries of the model training framework, easily deploy ML models in Rockchip NPU powered devices.”<br/><br/>
<div class="quote-attribution text-right">–Feng Chen, Senior Vice President, Rockchip</div>
</div>
</div>
<div id = "samtec" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/samtec-logo.png" alt="Samtec logo">
</div>
<div class="col-12 col-md-9 quote">“We needed a runtime engine to handle the transition from data science land to a high-performance production runtime system. ONNX Runtime (ORT) simply ‘just worked’. Having no previous experience with ORT, I was able to easily convert my models, and had prototypes running inference in multiple languages within just a few hours. ORT will be my go-to runtime engine for the foreseeable future.”<br/><br/>
<div class="quote-attribution text-right">–Bill McCrary, Application Architect, Samtec</div>
</div>
</div>
<div id = "sas" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/sas-logo.png" alt="SAS logo">
</div>
<div class="col-12 col-md-9 quote">“The unique combination of ONNX Runtime and SAS Event Stream Processing changes the game for developers and systems integrators by supporting flexible pipelines and enabling them to target multiple hardware platforms for the same AI models without bundling and packaging changes. This is crucial considering the additional build and test effort saved on an ongoing basis.”<br/><br/>
<div class="quote-attribution text-right">–Saurabh Mishra, Senior Manager, Product Management, Internet of Things, SAS</div>
</div>
</div>
<div id = "teradata" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/teradata-logo.png" alt="Teradata logo">
</div>
<div class="col-12 col-md-9 quote">“Teradata provides a highly extensible framework that enables importation and inference of previously trained Machine Learning (ML) and Deep Learning (DL) models. ONNX Runtime enables us to expand the capabilities of Vantage Bring Your Own Model (BYOM) and gives data scientists more options for ML and DL models integration, inference and production deployment within Teradata Vantage ecosystem.”<br/><br/>
<div class="quote-attribution text-right">–Michael Riordan, Director, Vantage Data Science and Analytics Products, Teradata</div>
</div>
</div>
<div id = "topaz-labs" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/topazlabs-logo.png" alt="Topaz Labs logo">
</div>
<div class="col-12 col-md-9 quote">“ONNX Runtime’s simple C API with DirectML provider enabled Topaz Labs to add support for AMD GPUs and NVIDIA Tensor Cores in just a couple of days. Furthermore, our models load many times faster on GPU than any other frameworks. Even our larger models with about 100 million parameters load within seconds.”<br/><br/>
<div class="quote-attribution text-right">–Suraj Raghuraman, Head of AI Engine, Topaz Labs</div>
</div>
</div>
<div id = "unrealengine" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/ue-logo.png" alt="Unreal Engine logo">
</div>
<div class="col-12 col-md-9 quote">“We selected ONNX Runtime as the backend of Unreal Engine’s Neural Network Interface (NNI) plugin inference system because of its extensibility to support the platforms that Unreal Engine runs on, while enabling ML practitioners to develop ML models in the frameworks of their choice. NNI evaluates neural networks in real time in Unreal Engine and acts as the foundation for game developers to use and deploy ML models to solve many development challenges, including animation, ML-based AI, camera tracking, and more.”<br/><br/>
<div class="quote-attribution text-right">–Francisco Vicente Carrasco, Research Engineering Lead, Epic Games</div>
</div>
</div>
<div id = "usda" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/usda-logo.png" alt="USDA logo">
</div>
<div class="col-12 col-md-9 quote">“At the USDA we use ONNX Runtime in GuideMaker, a program we developed to design pools of guide RNAs needed for large-scale gene editing experiments with CRISPR-Cas. ONNX allowed us to make an existing model more interoperable and ONNX Runtime speeds up predictions of guide RNA binding.”<br/><br/>
<div class="quote-attribution text-right">–Adam Rivers, Computational Biologist, United States Department of Agriculture, Agricultural Research Service</div>
</div>
</div>
<div id = "vespa" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/vespa-logo.png" alt="Vespa logo">
</div>
<div class="col-12 col-md-9 quote">“ONNX Runtime has vastly increased Vespa.ai’s capacity for evaluating large models, both in performance and model types we support.”<br/><br/>
<div class="quote-attribution text-right">–Lester Solbakken, Principal Engineer, Vespa.ai, Verizon Media</div>
</div>
</div>
<div id = "writer" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/writer-logo.png" alt="Writer logo">
</div>
<div class="col-12 col-md-9 quote">“We're big fans of ONNX Runtime at Writer. ONNX Runtime allows us to run inference on CPUs for models that we would otherwise have to use GPUs for, and allows us to run models on GPUs that would otherwise be too slow to meet our SLAs. The end result is that we can deliver better quality outputs to our users, and at lower costs.”<br/><br/>
<div class="quote-attribution text-right">–Sam Havens, Director of NLP Engineering, Writer</div>
</div>
</div>
<div id = "xilinx" class="quotebox row col-12 col-md-12 mb-4">
<div class="customer-logo col-12 col-md-3 text-center">
<img src="./images/logos/xilinx-logo.png" alt="Xilinx logo">
</div>
<div class="col-12 col-md-9 quote">“Xilinx is excited that Microsoft has announced Vitis™ AI interoperability and runtime support for ONNX Runtime, enabling developers to deploy machine learning models for inference to FPGA IaaS such as Azure NP series VMs and Xilinx edge devices.”<br/><br/>
<div class="quote-attribution text-right">–Sudip Nag, Corporate Vice President, Software & AI Products, Xilinx</div>
</div>
</div>
</div>
</section>
</div>
</div>
</div>
</div>
<!-- Partial footer.html Start-->
<div w3-include-html="footer.html"></div>
<!-- Partial footer.html End-->
<a id="back-to-top" href="JavaScript:void(0);" class="btn btn-lg back-to-top" role="button" aria-label="Back to top"><span class="fa fa-angle-up"></span></a>
<script src="https://www.w3schools.com/lib/w3.js"></script>
<script>w3.includeHTML();</script>
<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
<script src="./js/custom.js"></script>
</body>
</html>