Commit 428fb1a

Update vision.md
1 parent 356638b commit 428fb1a

File tree

1 file changed: 1 addition, 1 deletion
src/docs/guides/vision.md

@@ -33,7 +33,7 @@ The good news is that all of these things can be addressed by how your code uses
## A Word on Efficiency

When you are implementing a vision recognition system (or most any machine learning-based software system), you need to be aware of two costs:
-* **Training costs.** Iterating on training in order to increase model performance is a time-consuming and expensive process called [hyperparameter optimization](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)#Optimization). How much will it cost you to train a model, and what kind of accuracy can you get for a given amount of training? This type of training consumes lots of CPU (and possibly GPU), so you need to keep an eye on your [Amazon bill](https://aws.amazon.com/blogs/ai/fast-cnn-tuning-with-aws-gpu-instances-and-sigopt/).
+* **Training costs.** Iterating over different configuration parameters in order to increase model performance is a time-consuming and expensive process called [hyperparameter optimization](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)#Optimization). How much will it cost you to train a model, and what kind of accuracy can you get for a given amount of training? This type of training consumes lots of CPU (and possibly [GPU](https://devblogs.nvidia.com/parallelforall/sigopt-deep-learning-hyperparameter-optimization/)), so you need to keep an eye on your [Amazon bill](https://aws.amazon.com/blogs/ai/fast-cnn-tuning-with-aws-gpu-instances-and-sigopt/).
* **Inference costs.** Once you have a trained model, you'll use that model to "make inferences", the practitioner's fancy way of saying "using a trained model to make predictions". Here, you might need to be careful with CPU/GPU usage (battery consumption) or have only a limited amount of memory. Different algorithms are hungrier for power and memory than others, as [this handy analysis](https://arxiv.org/pdf/1605.07678.pdf) by Alfredo Canziani, Eugenio Culurciello (Purdue University), and Adam Paszke (University of Warsaw) shows. This graph shows the number of operations each system (one of the colored bubbles) requires to reach a certain accuracy on a specific image recognition test in ImageNet, the definitive image-recognition test set.

![How systems compare on number of operations required for a given accuracy level](/images/efficiency.png "How systems compare on number of operations required for a given accuracy level")
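
To make the training-cost point above concrete: every candidate configuration in a hyperparameter search means at least one more full training run, and the compute bill scales accordingly. The sketch below is illustrative only and is not part of vision.md or this commit; the dataset, model, and parameter grid are arbitrary stand-ins, using scikit-learn's GridSearchCV.

```python
# Illustrative sketch only (not part of this commit or vision.md): every point in
# the parameter grid triggers a full cross-validated training run, which is where
# the CPU/GPU cost of hyperparameter optimization comes from.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # small stand-in for a real vision dataset

param_grid = {"C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]}  # 9 configurations
search = GridSearchCV(SVC(), param_grid, cv=3)  # 9 configs x 3 folds = 27 training runs
search.fit(X, y)

print(search.best_params_, search.best_score_)
```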

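On the inference-cost side, a similarly hedged sketch: timing a batch of predictions and checking the serialized model size is a crude but quick way to compare candidates before reaching for a full operations-vs-accuracy analysis like the one cited above. Again, this is illustrative only and not part of the commit; the model and batch size are assumptions.

```python
# Illustrative sketch only (not part of this commit): one crude way to compare
# inference cost is to time a batch of predictions and check how big the
# serialized model is. The model and batch size here are arbitrary assumptions.
import pickle
import time

from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
model = SVC().fit(X, y)  # stand-in for whatever trained model you deploy

start = time.perf_counter()
model.predict(X[:100])  # 100 "inferences"
elapsed = time.perf_counter() - start

print(f"{elapsed * 1000:.1f} ms for 100 predictions")
print(f"serialized model size: {len(pickle.dumps(model))} bytes")
```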