Home
Let's use this wiki to keep a reading list of interesting papers. You can edit it.
The Sequence to Sequence Model
Attention Model and Conversation Model
Memory Network and [End-to-End Memory Network](http://arxiv.org/pdf/1503.08895v5.pdf)
NLP with Distributed Representation
NLP from scratch - deeptext model
AI Go Introduction in Chinese, Google AlphaGo, FB Darkforest
The Atari RL paper and its Nature paper, [Google Atari RL Architecture](http://www.iclr.cc/lib/exe/fetch.php?media=iclr2015:silver-iclr2015.pdf), and the Google distributed RL paper
Baidu's DNN-based speech recognition system
ImageNet 2015 winning solution, Deep Residual Learning
Clipping & regularization to alleviate exploding & vanishing gradients
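The clipping idea is simple enough to sketch: rescale the gradients whenever their global L2 norm exceeds a threshold. This is a minimal NumPy sketch; the function name and threshold are illustrative, not from any particular paper.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm
    does not exceed max_norm -- the usual fix for exploding gradients."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (global_norm + 1e-12))
    return [g * scale for g in grads]

grads = [np.array([3.0, 4.0])]            # global norm = 5
clipped = clip_by_global_norm(grads, 1.0) # rescaled to norm 1
```

Gradients already under the threshold pass through unchanged, since the scale factor is capped at 1.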
A good introduction of LSTM and its variants
Colah's Blog. The author explains neural network concepts in a clear way; in particular, I like the way he describes LSTMs.
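The standard LSTM cell those posts describe can be sketched in a few lines: forget, input, and output gates plus a tanh candidate, all computed from the previous hidden state and the current input. This is a minimal single-step sketch with one fused weight matrix; shapes and names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell.
    z holds the pre-activations of all four gates, stacked."""
    z = np.concatenate([h_prev, x]) @ W + b
    f, i, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # cell state update
    h = sigmoid(o) * np.tanh(c)                        # new hidden state
    return h, c

hidden, inp = 3, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(hidden + inp, 4 * hidden)) * 0.1
b = np.zeros(4 * hidden)
h, c = lstm_step(rng.normal(size=inp), np.zeros(hidden), np.zeros(hidden), W, b)
```

The additive cell-state update (`c = f*c_prev + i*g`) is the part that mitigates vanishing gradients, which is why the gating picture gets so much attention in those explanations.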
Distributed Training with SSP by Xing's group
The original CNN paper. Although it was written in 1998, the first two sections are still great to read today.
A list of deep learning papers, a little bit old
#### Deep Learning Framework comparison by Bartvm
## Compression of deep learning
[BinaryConnect](http://arxiv.org/pdf/1511.00363v2.pdf) Binarizes weights and quantizes (hidden or raw) inputs to save multiplications.
[BinaryNet](http://arxiv.org/pdf/1602.02830v2.pdf) Binary weights and activations.
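The core trick in BinaryConnect/BinaryNet is deterministic sign binarization: real-valued weights are kept for the gradient update, but the forward pass uses their signs in {-1, +1}, so multiplications reduce to sign flips and additions. A minimal sketch (the straight-through gradient machinery is omitted):

```python
import numpy as np

def binarize(W):
    """Deterministic binarization as in BinaryConnect: map each
    real-valued weight to -1 or +1 by its sign."""
    return np.where(W >= 0, 1.0, -1.0)

W = np.array([[0.3, -0.7],
              [-0.1, 0.9]])   # real-valued "shadow" weights, kept for updates
Wb = binarize(W)              # used in the forward pass
x = np.array([1.0, 2.0])
y = x @ Wb                    # multiply-free in hardware: only add/subtract
```

During training, gradients flow back as if the binarization were the identity (the straight-through estimator), and are applied to the real-valued `W`.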
[1-bit compression](http://research.microsoft.com/pubs/230137/IS140694.PDF) Good for dense data like speech, but doubtful for sparse data like text.

## Embedding
GloVe from Stanford NLP
## Linear Model
[FTRL: Follow the Regularized Leader](http://arxiv.org/pdf/1403.3465v3.pdf), Google's LR using FTRL
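The FTRL-Proximal update that paper describes has a neat closed form: accumulate a gradient-like term `z` and squared gradients `n` per coordinate, then recover the weights directly, with the L1 term producing exact zeros. A minimal per-step sketch under those definitions (hyperparameter values are illustrative):

```python
import numpy as np

def ftrl_step(w, z, n, g, alpha=0.1, beta=1.0, l1=1.0, l2=0.1):
    """One FTRL-Proximal update: accumulate z and n, then solve
    for the new (sparse) weights in closed form."""
    sigma = (np.sqrt(n + g ** 2) - np.sqrt(n)) / alpha
    z = z + g - sigma * w
    n = n + g ** 2
    w = np.where(
        np.abs(z) <= l1,
        0.0,  # L1 threshold -> exact zeros, hence sparse models
        -(z - np.sign(z) * l1) / ((beta + np.sqrt(n)) / alpha + l2),
    )
    return w, z, n

d = 4
w, z, n = np.zeros(d), np.zeros(d), np.zeros(d)
g = np.array([2.0, 0.5, -3.0, 0.1])  # toy gradient for one example
w, z, n = ftrl_step(w, z, n, g)      # small-|z| coordinates stay exactly 0
```

The built-in sparsity is why FTRL is attractive for large-scale logistic regression on sparse features: coordinates whose accumulated signal stays below `l1` never leave zero.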
## Recommendation
An extensive study by Xavier
## Others
Bayesian Program Learning