readme update
gaohuang authored Sep 21, 2016
1 parent aeb0531 commit bda6877
Showing 1 changed file with 23 additions and 1 deletion.
README.md
# DenseNet_lite
A more memory-efficient Torch implementation of "Densely Connected Convolutional Networks" (reduces memory consumption during training by ~25%).

This implements the DenseNet architecture introduced in [Densely Connected Convolutional Networks](http://arxiv.org/abs/1608.06993). The original Torch implementation can be found at https://github.com/liuzhuang13/DenseNet; see that repository for more details about DenseNet. The only difference here is a custom container, ```DenseLayer.lua```, which implements the dense connections in a more memory-efficient way. This reduces memory consumption during training by about 25%, while the training time stays almost the same.
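To make the idea concrete, below is a minimal, hypothetical sketch of what such a container can look like with Torch's ```nn``` API. It is not the repository's actual ```DenseLayer.lua``` (names, constructor arguments, and the backward pass are simplified away); it only illustrates the memory-saving pattern of writing the input and the newly computed features into one reused output buffer, rather than allocating a fresh concatenation on every forward pass:

```
require 'nn'

-- Hypothetical, simplified container (NOT the repository's code): forward the
-- input through a small sub-network, then write input + new features into one
-- reused buffer instead of allocating a fresh concatenation at every layer.
local DenseLayer, parent = torch.class('nn.DenseLayer', 'nn.Container')

function DenseLayer:__init(net, nChannels, growthRate)
   parent.__init(self)
   self.net = net               -- e.g. BN-ReLU-Conv producing growthRate maps
   self.nChannels = nChannels   -- channels of the incoming feature map
   self.growthRate = growthRate
   self.modules = {net}         -- register the sub-network with the container
end

function DenseLayer:updateOutput(input)
   local newFeatures = self.net:updateOutput(input)
   -- self.output is reused across iterations; after the first call the
   -- resize is a no-op because the storage is already large enough
   self.output:resize(input:size(1), self.nChannels + self.growthRate,
                      input:size(3), input:size(4))
   self.output:narrow(2, 1, self.nChannels):copy(input)
   self.output:narrow(2, self.nChannels + 1, self.growthRate):copy(newFeatures)
   return self.output
end

-- updateGradInput/accGradParameters are omitted here; a real container must
-- also implement the backward pass to be trainable.
```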

0. Install Torch ResNet (https://github.com/facebook/fb.resnet.torch) following the instructions there. To reduce memory consumption, we recommend installing the [optnet](https://github.com/fmassa/optimize-net) package.
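
optnet is distributed as a Luarocks package, so it can typically be installed with:

```
luarocks install optnet
```
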
1. Add the files ```densenet_lite.lua``` and ```DenseLayer.lua``` to the ```models/``` folder.
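
For orientation, here is a rough, hypothetical sketch of how ```densenet_lite.lua``` can chain such containers into a dense block; the names, the ```nn.DenseLayer``` constructor signature, and the structure are illustrative, not the repository's exact code:

```
-- Illustrative only: stack nLayers BN-ReLU-Conv(3x3) units, each wrapped in
-- the custom container, so the channel count grows by growthRate per layer.
local function addDenseBlock(model, nChannels, growthRate, nLayers)
   for i = 1, nLayers do
      local net = nn.Sequential()
         :add(nn.SpatialBatchNormalization(nChannels))
         :add(nn.ReLU(true))
         :add(nn.SpatialConvolution(nChannels, growthRate, 3, 3, 1, 1, 1, 1))
      model:add(nn.DenseLayer(net, nChannels, growthRate)) -- hypothetical signature
      nChannels = nChannels + growthRate
   end
   return nChannels
end
```
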
2. Change the learning rate schedule in the ```learningRate()``` function of ```train.lua``` (line 171/173),
from

```decay = epoch >= 122 and 2 or epoch >= 81 and 1 or 0```

to

```decay = epoch >= 225 and 2 or epoch >= 150 and 1 or 0```
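
In fb.resnet.torch, ```decay``` is the exponent of a multiplicative 0.1 drop, so (assuming the default base learning rate of 0.1) the new schedule divides the learning rate by 10 at epochs 150 and 225, i.e. at 50% and 75% of the 300 training epochs:

```
-- Sketch of the resulting schedule, assuming a base learning rate of 0.1:
local function learningRate(epoch)
   local decay = epoch >= 225 and 2 or epoch >= 150 and 1 or 0
   return 0.1 * math.pow(0.1, decay)   -- 0.1, then 0.01, then 0.001
end
```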

3. Train a DenseNet (L=40, k=12) on CIFAR-10+ using

```
th main.lua -netType densenet_lite -depth 40 -dataset cifar10 -batchSize 64 -nEpochs 300 -optnet true
```
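
As a sanity check on the flags: with the paper's basic (non-bottleneck) layout of three dense blocks, a depth of L=40 means 12 convolutional layers per block:

```
-- (depth - 4) / 3: the initial convolution, two transition layers, and the
-- classifier account for 4 layers; the rest split evenly across 3 blocks.
local depth = 40
local layersPerBlock = (depth - 4) / 3   -- = 12 dense layers per block
```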


