
Getting Started

kma-code edited this page Jul 5, 2021 · 13 revisions

Terminology and motivation for the experiments

These repositories implement experiments for several learning algorithms on standard benchmark datasets (MNIST, CIFAR, ...):

  • vbp: vanilla backprop, i.e. the standard backpropagation algorithm
  • fa: feedback alignment
  • dyn_pseudo: dynamical pseudobackpropagation with learning of the backward weights
  • pseudo_backprop: backprop without learning of backward weights, but instead setting them to the pseudoinverse of the forward weights
  • gen_pseudo: generalized pseudobackprop. Same as pseudo_backprop, but using the data-specific pseudoinverse

The last two variants are not efficient, as computing the pseudoinverse is time-consuming. They are implemented only for comparison with dyn_pseudo.
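The core idea behind pseudo_backprop can be sketched in a few lines of NumPy: instead of learning the backward weights (as dyn_pseudo does), they are set directly to the Moore-Penrose pseudoinverse of the forward weights. This is an illustrative sketch, not the package's actual API:

```python
# Illustrative sketch of the pseudo_backprop idea: backward weights are
# the Moore-Penrose pseudoinverse of the forward weights (not the
# package's real code).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 30))  # forward weights of one layer

B = np.linalg.pinv(W)              # backward weights = pseudoinverse of W

# Defining property of the Moore-Penrose pseudoinverse: W @ B @ W == W
# (up to numerical precision).
assert np.allclose(W @ B @ W, W)
```

gen_pseudo follows the same idea but uses a data-specific pseudoinverse, i.e. the backward mapping is adapted to the layer's actual activations rather than computed globally from the weights alone.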

Installation

The project consists of two git repositories: pseudoBackprop (the library) and exp_pseudo_backprop (the experiments).

For the installation, clone and install the packages locally with pip:

git clone [email protected]:unibe-cns/pseudoBackprop.git
cd pseudoBackprop
pip install .

and similarly

git clone [email protected]:unibe-cns/exp_pseudo_backprop.git
cd exp_pseudo_backprop
pip install .
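A quick way to check that the installation succeeded is to test whether the package can be imported. Note that the module name `pseudo_backprop` used below is an assumption derived from the repository name; adjust it if the installed package exposes a different name:

```python
# Minimal post-install check. The module name "pseudo_backprop" is an
# assumption based on the repository name pseudoBackprop.
import importlib.util

name = "pseudo_backprop"
if importlib.util.find_spec(name) is None:
    print(f"{name} is NOT importable - check the installation")
else:
    print(f"{name} is importable")
```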

Tests

To check the installation, run the tests. In pseudoBackprop:

cd pseudoBackprop
nosetests

The exp_pseudo_backprop repository does not have any tests yet.

Installation on the CSCS cluster

Start from a fresh bash session. First install PyTorch on the CSCS cluster following the tutorial. Then set up a new virtual environment and activate it:

python -m venv --system-site-packages ~/venv/pbp
source ~/venv/pbp/bin/activate

Then clone and install the repositories as above:

git clone [email protected]:unibe-cns/pseudoBackprop.git
cd pseudoBackprop
pip install .

and similarly

git clone [email protected]:unibe-cns/exp_pseudo_backprop.git
cd exp_pseudo_backprop
pip install .

Running the first experiments

Single shot experiments

A single-shot experiment running vanilla backpropagation, feedback alignment, pseudo-backpropagation and generalized pseudo-backpropagation is found in the pseudoBackprop project. To run it:

cd pseudoBackprop/examples/single_shot
python run_complete.py

The run trains and tests the four cases on the MNIST dataset. It produces four data folders (model_bp, model_fa, model_pseudo and model_gen_pseudo), several log files, and a results.png file comparing the loss and learning rates during learning.

Several repetitions

The exp_pseudo_backprop project contains an example for running several repetitions of the same experiment, with functionality to execute it either on your own machine or via slurm scripts. For the example, go to:

cd exp_pseudo_backprop/examples/repetitions

The readme.md in this folder contains the instructions for running the experiments. On a cluster, the stages are carried out on the frontend, and the individual jobs are sent to the nodes. On a single machine, the individual jobs are spawned as subprocesses. The stages are separated from each other to simplify testing. Setup creates the folder structure; this is fast and only uses the frontend:

python -m exp_pseudo_backprop.multiple_exps --command setup 

Run carries out the simulations. This sends the jobs to the nodes and can take a long time:

python -m exp_pseudo_backprop.multiple_exps --command run

Gather unifies the results from the individual jobs into a format more convenient for plotting and for transfer to other media:

python -m exp_pseudo_backprop.multiple_exps --command gather

Plot creates the report plots:

python -m exp_pseudo_backprop.multiple_exps --command plot
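On a single machine, the four stages above can also be chained in order with a small driver script. This is a sketch; the module path and stage names are taken from the commands above, everything else is illustrative:

```python
# Sketch: run the four stages (setup, run, gather, plot) in sequence
# on a local machine. check=True aborts the chain if a stage fails.
import subprocess
import sys

STAGES = ["setup", "run", "gather", "plot"]

for stage in STAGES:
    cmd = [sys.executable, "-m", "exp_pseudo_backprop.multiple_exps",
           "--command", stage]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```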

The simulations expect a master folder containing the parameter files params_fa.json, params_gen_pseudo.json, params_pseudo_backprop.json and params_vbp.json; these exact names are required. In addition, an exp_control.json file describing the metaparameters of the parameter sweep is required in the master folder. Carrying out all the experiments produces the result files res_bp.npz, res_fa.npz, res_gen_pseudo.npz and res_pseudo.npz, as well as the output plots exp_results_median.png and exp_results_lines.png.
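The gathered result files are standard NumPy archives, so they can be inspected directly. The snippet below only lists whatever arrays the gather step stored; no particular array names inside the archives are assumed:

```python
# Sketch: list the arrays stored in the gathered result files
# (res_bp.npz, res_fa.npz, res_gen_pseudo.npz, res_pseudo.npz).
import os
import numpy as np

def summarize(path):
    """Print the name, shape and dtype of every array in an .npz file."""
    with np.load(path) as data:
        for key in data.files:
            print(f"{path}: {key} -> shape {data[key].shape}, "
                  f"dtype {data[key].dtype}")

for path in ["res_bp.npz", "res_fa.npz",
             "res_gen_pseudo.npz", "res_pseudo.npz"]:
    if os.path.exists(path):
        summarize(path)
```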

For more information on the functionality, see the User Manual.

Running experiments on the CSCS cluster

After installation, go to the example:

cd exp_pseudo_backprop/examples/cscs_job

The simulation runs analogously to the local case: carry out the same four stages (setup, run, gather, plot) described in the previous section, with setup, gather and plot running on the frontend and run sending the jobs to the nodes:

python -m exp_pseudo_backprop.multiple_exps --command setup
python -m exp_pseudo_backprop.multiple_exps --command run
python -m exp_pseudo_backprop.multiple_exps --command gather
python -m exp_pseudo_backprop.multiple_exps --command plot