The goal of the project was to implement a Stochastic Gradient Descent (SGD) algorithm with mini-batches in CUDA, in order to apply the GPU programming concepts covered in the course.
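The core of mini-batch SGD is an element-wise parameter update after each batch's gradient is computed. A minimal sketch of such an update kernel is shown below; the names (`sgd_update`, `d_w`, `d_grad`) are illustrative and not taken from the project code.

```cuda
#include <cuda_runtime.h>

// Hypothetical mini-batch SGD update: w[i] -= lr * grad[i],
// one thread per parameter.
__global__ void sgd_update(float *w, const float *grad, float lr, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        w[i] -= lr * grad[i];
    }
}

// Launch example, assuming d_w and d_grad are device buffers of n floats:
// sgd_update<<<(n + 255) / 256, 256>>>(d_w, d_grad, 0.01f, n);
```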
The implementation was carried out on Google Colab; the full code can be viewed in the attached notebook. A matrix abstraction, called fmatrix in the program, was used to simplify the manipulation of matrices.
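The actual fields of fmatrix are not shown in this summary; a plausible sketch is a pair of dimensions plus a device pointer to the data, with a small allocation helper (`fmatrix_alloc` is a hypothetical name).

```cuda
#include <cuda_runtime.h>

// Plausible layout for the fmatrix abstraction: dimensions plus a
// device buffer, stored column-major as cuBLAS expects.
struct fmatrix {
    int rows;
    int cols;
    float *data;   // device pointer, rows * cols floats
};

// Hypothetical helper allocating a device-resident fmatrix.
fmatrix fmatrix_alloc(int rows, int cols)
{
    fmatrix m{rows, cols, nullptr};
    cudaMalloc(&m.data, sizeof(float) * rows * cols);
    return m;
}
```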
The dataset used to validate the implementation is the California Housing dataset, which is provided by default in Google Colab notebooks (in the sample_data folder).
The SGEMM (Single-precision GEneral Matrix Multiply) routine of the cuBLAS library was used for more efficient matrix multiplication. Several experiments were run to assess the impact of hyperparameters (especially batch size, learning rate, and number of epochs) and of batch shuffling on the accuracy and speed of the algorithm.
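A typical use of SGEMM in this setting is the forward pass of the linear model, computing predictions Y = X * W for one mini-batch. The sketch below assumes column-major device buffers and illustrative names (`forward`, `d_X`, `d_W`, `d_Y`); it is not the project's actual code.

```cuda
#include <cublas_v2.h>

// Hedged sketch: Y = X * W for one mini-batch via cublasSgemm.
// X is (batch x features), W is (features x 1), Y is (batch x 1),
// all column-major on the device.
void forward(cublasHandle_t handle, const float *d_X, const float *d_W,
             float *d_Y, int batch, int features)
{
    const float alpha = 1.0f, beta = 0.0f;
    // cublasSgemm computes C = alpha * op(A) * op(B) + beta * C.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                batch,      // m: rows of C
                1,          // n: cols of C
                features,   // k: shared dimension
                &alpha, d_X, batch,
                d_W, features,
                &beta, d_Y, batch);
}
```

The leading dimensions (lda, ldb, ldc) equal the row counts here because the buffers are dense column-major matrices.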
aillaud/CUDA-Gradient-Descent
About
Implementation of a linear classifier on the Google California Housing Data Set in CUDA