* **Inference costs.** Once you have a trained model, you'll use it to "make inferences", the practitioner's fancy way of saying "using a trained model to make predictions". Here, you might need to be careful with CPU/GPU usage (battery consumption) or have only a limited amount of memory. Some algorithms are hungrier for power and memory than others, as [this handy analysis](https://arxiv.org/pdf/1605.07678.pdf) by Alfredo Canziani, Eugenio Culurciello (Purdue University), and Adam Paszke (University of Warsaw) shows. Their graph plots the number of operations each system (one of the colored bubbles) requires to reach a certain accuracy on ImageNet, the definitive image-recognition test set.
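To make the cost trade-off concrete, here is a minimal back-of-the-envelope sketch (not taken from the linked paper) that counts two common proxies for inference cost in a fully connected network: parameter count (memory) and multiply-accumulate operations (compute). The layer sizes below are hypothetical examples.

```python
def layer_costs(sizes):
    """Given a list of layer widths, return (parameters, multiply-accumulates)
    for one forward pass through a fully connected network."""
    params = 0
    macs = 0
    for n_in, n_out in zip(sizes, sizes[1:]):
        params += n_in * n_out + n_out   # weight matrix plus bias vector
        macs += n_in * n_out             # one multiply-accumulate per weight
    return params, macs

# A modest MNIST-style classifier vs. a much wider (hypothetical) network:
small = layer_costs([784, 128, 10])
big = layer_costs([784, 2048, 2048, 10])

print("small:", small)   # → small: (101770, 101632)
print("big:  ", big)     # → big:   (5824522, 5820416)
```

The wider network needs roughly 57x the memory and compute per prediction, which is exactly the kind of gap that matters on a battery-powered or memory-constrained device.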