@falconzyx
Forked from debasishg/gist:b4df1648d3f1776abdff
Last active August 29, 2015 14:18

Revisions

  1. @debasishg revised this gist Apr 3, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion gistfile1.md
    @@ -44,7 +44,7 @@
    * [Recurrent Neural Networks for Collaborative Filtering](http://erikbern.com/?p=589)

    7. *Interesting courses*
    - * [CS231n: Convolutional Neural Networks for Visual Recognition](http://cs231n.github.io/) at Stanford by Andrej Karpathy
    + * [CS231n: Convolutional Neural Networks for Visual Recognition](http://cs231n.stanford.edu/) at Stanford by Andrej Karpathy
    * [CS224d: Deep Learning for Natural Language Processing](http://cs224d.stanford.edu/) at Stanford by Richard Socher
    * [STA 4273H (Winter 2015): Large Scale Machine Learning](http://www.cs.toronto.edu/~rsalakhu/STA4273_2015/) at Toronto by Russ Salakhutdinov
    * [AM 207 Monte Carlo Methods, Stochastic Optimization](http://am207.org/) at Harvard by Verena Kaynig-Fittkau and Pavlos Protopapas
  2. @debasishg revised this gist Apr 3, 2015. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion gistfile1.md
    @@ -4,7 +4,7 @@
    * [Emergence of Object-Selective Features in Unsupervised Feature Learning](http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf) by Coates, Ng
    * [Scaling Learning Algorithms towards AI](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) Benjio & LeCun

    - 2. *Deep Neural Nets*
    + 2. *Deep Learning*
    * [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf) by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever and Ruslan Salakhutdinov
    * [Understanding the difficulty of training deep feedforward neural networks](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf) by Xavier Glorot and Yoshua Bengio
    * [On the difficulty of training Recurrent Neural Networks](http://arxiv.org/pdf/1211.5063v2.pdf) by Razvan Pascanu, Tomas Mikolov and Yoshua Bengio
    @@ -16,6 +16,7 @@
    * [Efficient Backprop](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) by LeCun, Bottou et al
    * [Towards Biologically Plausible Deep Learning](http://arxiv.org/abs/1502.04156) by Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Zhouhan Lin
    * [Training Recurrent Neural Networks](http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf) Phd thesis of Ilya Sutskever
    + * [A Probabilistic Theory of Deep Learning](http://arxiv.org/pdf/1504.00641v1.pdf) by Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

    3. *Scalable Machine Learning*
    * [Bring the Noise: Embracing Randomness is the Key to Scaling Up Machine Learning Algorithms](http://online.liebertpub.com/doi/pdf/10.1089/big.2013.0010) by Brian Delssandro
  3. @debasishg revised this gist Apr 2, 2015. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions gistfile1.md
    @@ -48,3 +48,4 @@
    * [STA 4273H (Winter 2015): Large Scale Machine Learning](http://www.cs.toronto.edu/~rsalakhu/STA4273_2015/) at Toronto by Russ Salakhutdinov
    * [AM 207 Monte Carlo Methods, Stochastic Optimization](http://am207.org/) at Harvard by Verena Kaynig-Fittkau and Pavlos Protopapas
    * [ACL 2012 + NAACL 2013 Tutorial: Deep Learning for NLP (without Magic)](http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial) at NAACL 2013 by Richard Socher, Chris Manning and Yoshua Bengio
    + * [Video course on Deep Learning](https://www.youtube.com/playlist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH) by Hugo Larochelle
  4. @debasishg revised this gist Apr 2, 2015. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions gistfile1.md
    @@ -47,3 +47,4 @@
    * [CS224d: Deep Learning for Natural Language Processing](http://cs224d.stanford.edu/) at Stanford by Richard Socher
    * [STA 4273H (Winter 2015): Large Scale Machine Learning](http://www.cs.toronto.edu/~rsalakhu/STA4273_2015/) at Toronto by Russ Salakhutdinov
    * [AM 207 Monte Carlo Methods, Stochastic Optimization](http://am207.org/) at Harvard by Verena Kaynig-Fittkau and Pavlos Protopapas
    + * [ACL 2012 + NAACL 2013 Tutorial: Deep Learning for NLP (without Magic)](http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial) at NAACL 2013 by Richard Socher, Chris Manning and Yoshua Bengio
  5. @debasishg revised this gist Apr 2, 2015. 1 changed file with 7 additions and 1 deletion.
    8 changes: 7 additions & 1 deletion gistfile1.md
    @@ -35,9 +35,15 @@

    6. *Interesting blog posts*
    * [Hacker's Guide to Neural Networks](https://karpathy.github.io/neuralnets/) by Andrej Karpathy
    + * [Breaking Linear Classifiers on ImageNet](http://karpathy.github.io/2015/03/30/breaking-convnets/) by Andrej Karpathy
    * [Classifying plankton with Deep Neural Networks](http://benanne.github.io/2015/03/17/plankton.html)
    * [Deep stuff about deep learning?](https://blogs.princeton.edu/imabandit/2015/03/20/deep-stuff-about-deep-learning/)
    * [Understanding Convolution in Deep Learning](https://timdettmers.wordpress.com/2015/03/26/convolution-deep-learning/)
    * [A Brief Overview of Deep Learning](http://yyue.blogspot.in/2015/01/a-brief-overview-of-deep-learning.html) by Ilya Sutskever
    * [Recurrent Neural Networks for Collaborative Filtering](http://erikbern.com/?p=589)


    + 7. *Interesting courses*
    + * [CS231n: Convolutional Neural Networks for Visual Recognition](http://cs231n.github.io/) at Stanford by Andrej Karpathy
    + * [CS224d: Deep Learning for Natural Language Processing](http://cs224d.stanford.edu/) at Stanford by Richard Socher
    + * [STA 4273H (Winter 2015): Large Scale Machine Learning](http://www.cs.toronto.edu/~rsalakhu/STA4273_2015/) at Toronto by Russ Salakhutdinov
    + * [AM 207 Monte Carlo Methods, Stochastic Optimization](http://am207.org/) at Harvard by Verena Kaynig-Fittkau and Pavlos Protopapas
  6. @debasishg revised this gist Apr 2, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion gistfile1.md
    @@ -32,7 +32,7 @@
    5. *Non Linear Units*
    * [Rectified Linear Units Improve Restricted Boltzmann Machines](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.165.6419&rep=rep1&type=pdf) by Nair & Hinton
    * [Mathematical Intuition for Performance of Rectified Linear Unit in Deep Neural Networks](https://www.academia.edu/7826776/Mathematical_Intuition_for_Performance_of_Rectified_Linear_Unit_in_Deep_Neural_Networks) by Alexandre Dalyec
    - *

    6. *Interesting blog posts*
    * [Hacker's Guide to Neural Networks](https://karpathy.github.io/neuralnets/) by Andrej Karpathy
    * [Classifying plankton with Deep Neural Networks](http://benanne.github.io/2015/03/17/plankton.html)
  7. @debasishg revised this gist Apr 2, 2015. 1 changed file with 8 additions and 0 deletions.
    8 changes: 8 additions & 0 deletions gistfile1.md
    @@ -32,4 +32,12 @@
    5. *Non Linear Units*
    * [Rectified Linear Units Improve Restricted Boltzmann Machines](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.165.6419&rep=rep1&type=pdf) by Nair & Hinton
    * [Mathematical Intuition for Performance of Rectified Linear Unit in Deep Neural Networks](https://www.academia.edu/7826776/Mathematical_Intuition_for_Performance_of_Rectified_Linear_Unit_in_Deep_Neural_Networks) by Alexandre Dalyec
    + *
    + 6. *Interesting blog posts*
    + * [Hacker's Guide to Neural Networks](https://karpathy.github.io/neuralnets/) by Andrej Karpathy
    + * [Classifying plankton with Deep Neural Networks](http://benanne.github.io/2015/03/17/plankton.html)
    + * [Deep stuff about deep learning?](https://blogs.princeton.edu/imabandit/2015/03/20/deep-stuff-about-deep-learning/)
    + * [Understanding Convolution in Deep Learning](https://timdettmers.wordpress.com/2015/03/26/convolution-deep-learning/)
    + * [A Brief Overview of Deep Learning](http://yyue.blogspot.in/2015/01/a-brief-overview-of-deep-learning.html) by Ilya Sutskever
    + * [Recurrent Neural Networks for Collaborative Filtering](http://erikbern.com/?p=589)

  8. @debasishg revised this gist Mar 26, 2015. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions gistfile1.md
    @@ -23,6 +23,7 @@
    * [The TradeOffs of Large Scale Learning](http://papers.nips.cc/paper/3323-the-tradeoffs-of-large-scale-learning.pdf) by Leon Bottou & Olivier Bousquet
    * [Hash Kernels for Structured Data](http://www.jmlr.org/papers/volume10/shi09a/shi09a.pdf) by Qinfeng Shi et. al.
    * [Feature Hashing for Large Scale Multitask Learning](http://arxiv.org/pdf/0902.2206.pdf) by Weinberger et. al.
    + * [Large-Scale Learning with Less RAM via Randomization](http://www.eecs.tufts.edu/~dsculley/papers/round-model-icml.pdf) by a group of authors from Google

    4. *Gradient based Training*
    * [Practical Recommendations for Gradient-Based Training of Deep Architectures](http://arxiv.org/pdf/1206.5533v2.pdf) by Yoshua Bengio
  9. @debasishg revised this gist Mar 23, 2015. 1 changed file with 10 additions and 2 deletions.
    12 changes: 10 additions & 2 deletions gistfile1.md
    @@ -10,17 +10,25 @@
    * [On the difficulty of training Recurrent Neural Networks](http://arxiv.org/pdf/1211.5063v2.pdf) by Razvan Pascanu, Tomas Mikolov and Yoshua Bengio
    * [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](http://arxiv.org/abs/1502.03167) by Sergey Ioffe and Christian Szegedy
    * [Deep Learning in Neural Networks: An Overview](http://arxiv.org/pdf/1404.7828v4.pdf) by Jurgen Schmidhuber
    - * [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) by L´eon Bottou
    * [Qualitatively characterizing neural network optimization problems](http://arxiv.org/abs/1412.6544) by Ian J. Goodfellow, Oriol Vinyals
    * [On Recurrent and Deep Neural Networks](http://www-etud.iro.umontreal.ca/~pascanur/papers/thesis.pdf) Phd thesis of Razvan Pascanu
    * [Scaling Learning Algorithms towards AI](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) by Yann LeCun and Yoshua Benjio
    * [Efficient Backprop](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) by LeCun, Bottou et al
    * [Towards Biologically Plausible Deep Learning](http://arxiv.org/abs/1502.04156) by Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Zhouhan Lin
    * [Training Recurrent Neural Networks](http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf) Phd thesis of Ilya Sutskever

    - 2. *Scalable Machine Learning*
    + 3. *Scalable Machine Learning*
    * [Bring the Noise: Embracing Randomness is the Key to Scaling Up Machine Learning Algorithms](http://online.liebertpub.com/doi/pdf/10.1089/big.2013.0010) by Brian Delssandro
    * [Large Scale Machine Learning with Stochastic Gradient Descent](http://leon.bottou.org/publications/pdf/compstat-2010.pdf) by Leon Bottou
    * [The TradeOffs of Large Scale Learning](http://papers.nips.cc/paper/3323-the-tradeoffs-of-large-scale-learning.pdf) by Leon Bottou & Olivier Bousquet
    * [Hash Kernels for Structured Data](http://www.jmlr.org/papers/volume10/shi09a/shi09a.pdf) by Qinfeng Shi et. al.
    * [Feature Hashing for Large Scale Multitask Learning](http://arxiv.org/pdf/0902.2206.pdf) by Weinberger et. al.

    + 4. *Gradient based Training*
    + * [Practical Recommendations for Gradient-Based Training of Deep Architectures](http://arxiv.org/pdf/1206.5533v2.pdf) by Yoshua Bengio
    + * [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) by L´eon Bottou

    + 5. *Non Linear Units*
    + * [Rectified Linear Units Improve Restricted Boltzmann Machines](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.165.6419&rep=rep1&type=pdf) by Nair & Hinton
    + * [Mathematical Intuition for Performance of Rectified Linear Unit in Deep Neural Networks](https://www.academia.edu/7826776/Mathematical_Intuition_for_Performance_of_Rectified_Linear_Unit_in_Deep_Neural_Networks) by Alexandre Dalyec

  10. @debasishg revised this gist Mar 23, 2015. 1 changed file with 7 additions and 0 deletions.
    7 changes: 7 additions & 0 deletions gistfile1.md
    @@ -17,3 +17,10 @@
    * [Efficient Backprop](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) by LeCun, Bottou et al
    * [Towards Biologically Plausible Deep Learning](http://arxiv.org/abs/1502.04156) by Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Zhouhan Lin
    * [Training Recurrent Neural Networks](http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf) Phd thesis of Ilya Sutskever

    + 2. *Scalable Machine Learning*
    + * [Bring the Noise: Embracing Randomness is the Key to Scaling Up Machine Learning Algorithms](http://online.liebertpub.com/doi/pdf/10.1089/big.2013.0010) by Brian Delssandro
    + * [Large Scale Machine Learning with Stochastic Gradient Descent](http://leon.bottou.org/publications/pdf/compstat-2010.pdf) by Leon Bottou
    + * [The TradeOffs of Large Scale Learning](http://papers.nips.cc/paper/3323-the-tradeoffs-of-large-scale-learning.pdf) by Leon Bottou & Olivier Bousquet
    + * [Hash Kernels for Structured Data](http://www.jmlr.org/papers/volume10/shi09a/shi09a.pdf) by Qinfeng Shi et. al.
    + * [Feature Hashing for Large Scale Multitask Learning](http://arxiv.org/pdf/0902.2206.pdf) by Weinberger et. al.
  11. @debasishg revised this gist Mar 19, 2015. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions gistfile1.md
    @@ -16,3 +16,4 @@
    * [Scaling Learning Algorithms towards AI](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) by Yann LeCun and Yoshua Benjio
    * [Efficient Backprop](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) by LeCun, Bottou et al
    * [Towards Biologically Plausible Deep Learning](http://arxiv.org/abs/1502.04156) by Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Zhouhan Lin
    + * [Training Recurrent Neural Networks](http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf) Phd thesis of Ilya Sutskever
  12. @debasishg revised this gist Mar 1, 2015. 1 changed file with 3 additions and 0 deletions.
    3 changes: 3 additions & 0 deletions gistfile1.md
    @@ -13,3 +13,6 @@
    * [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) by L´eon Bottou
    * [Qualitatively characterizing neural network optimization problems](http://arxiv.org/abs/1412.6544) by Ian J. Goodfellow, Oriol Vinyals
    * [On Recurrent and Deep Neural Networks](http://www-etud.iro.umontreal.ca/~pascanur/papers/thesis.pdf) Phd thesis of Razvan Pascanu
    + * [Scaling Learning Algorithms towards AI](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) by Yann LeCun and Yoshua Benjio
    + * [Efficient Backprop](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) by LeCun, Bottou et al
    + * [Towards Biologically Plausible Deep Learning](http://arxiv.org/abs/1502.04156) by Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Zhouhan Lin
  13. @debasishg revised this gist Feb 24, 2015. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions gistfile1.md
    @@ -12,3 +12,4 @@
    * [Deep Learning in Neural Networks: An Overview](http://arxiv.org/pdf/1404.7828v4.pdf) by Jurgen Schmidhuber
    * [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) by L´eon Bottou
    * [Qualitatively characterizing neural network optimization problems](http://arxiv.org/abs/1412.6544) by Ian J. Goodfellow, Oriol Vinyals
    + * [On Recurrent and Deep Neural Networks](http://www-etud.iro.umontreal.ca/~pascanur/papers/thesis.pdf) Phd thesis of Razvan Pascanu
  14. @debasishg revised this gist Feb 21, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion gistfile1.md
    @@ -2,7 +2,7 @@
    * [Learning Feature Representations with K-means](http://www.cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf) by Adam Coates and Andrew Y. Ng
    * [The devil is in the details: an evaluation of recent feature encoding methods](http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf) by Chatfield et. al.
    * [Emergence of Object-Selective Features in Unsupervised Feature Learning](http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf) by Coates, Ng
    - * [Scaling Learning Algorithms towards AI](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) Bnejio & LeCun
    + * [Scaling Learning Algorithms towards AI](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) Benjio & LeCun

    2. *Deep Neural Nets*
    * [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf) by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever and Ruslan Salakhutdinov
  15. @debasishg revised this gist Feb 21, 2015. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions gistfile1.md
    @@ -2,6 +2,7 @@
    * [Learning Feature Representations with K-means](http://www.cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf) by Adam Coates and Andrew Y. Ng
    * [The devil is in the details: an evaluation of recent feature encoding methods](http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf) by Chatfield et. al.
    * [Emergence of Object-Selective Features in Unsupervised Feature Learning](http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf) by Coates, Ng
    + * [Scaling Learning Algorithms towards AI](http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf) Bnejio & LeCun

    2. *Deep Neural Nets*
    * [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf) by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever and Ruslan Salakhutdinov
  16. @debasishg revised this gist Feb 21, 2015. 1 changed file with 10 additions and 1 deletion.
    11 changes: 10 additions & 1 deletion gistfile1.md
    @@ -1,4 +1,13 @@
    1. *Feature Learning*
    * [Learning Feature Representations with K-means](http://www.cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf) by Adam Coates and Andrew Y. Ng
    * [The devil is in the details: an evaluation of recent feature encoding methods](http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf) by Chatfield et. al.
    - * [Emergence of Object-Selective Features in Unsupervised Feature Learning](http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf) by Coates, Ng
    + * [Emergence of Object-Selective Features in Unsupervised Feature Learning](http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf) by Coates, Ng

    + 2. *Deep Neural Nets*
    + * [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf) by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever and Ruslan Salakhutdinov
    + * [Understanding the difficulty of training deep feedforward neural networks](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf) by Xavier Glorot and Yoshua Bengio
    + * [On the difficulty of training Recurrent Neural Networks](http://arxiv.org/pdf/1211.5063v2.pdf) by Razvan Pascanu, Tomas Mikolov and Yoshua Bengio
    + * [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](http://arxiv.org/abs/1502.03167) by Sergey Ioffe and Christian Szegedy
    + * [Deep Learning in Neural Networks: An Overview](http://arxiv.org/pdf/1404.7828v4.pdf) by Jurgen Schmidhuber
    + * [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) by L´eon Bottou
    + * [Qualitatively characterizing neural network optimization problems](http://arxiv.org/abs/1412.6544) by Ian J. Goodfellow, Oriol Vinyals
  17. @debasishg revised this gist Feb 21, 2015. 2 changed files with 4 additions and 5 deletions.
    4 changes: 4 additions & 0 deletions gistfile1.md
    @@ -0,0 +1,4 @@
    + 1. *Feature Learning*
    + * [Learning Feature Representations with K-means](http://www.cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf) by Adam Coates and Andrew Y. Ng
    + * [The devil is in the details: an evaluation of recent feature encoding methods](http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf) by Chatfield et. al.
    + * [Emergence of Object-Selective Features in Unsupervised Feature Learning](http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf) by Coates, Ng
    5 changes: 0 additions & 5 deletions gistfile1.txt
    @@ -1,5 +0,0 @@
    - Feature Learning

    - 1. Learning Feature Representations with K-means - Adam Coates and Andrew Y. Ng http://www.cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf
    - 2. The devil is in the details: an evaluation of recent feature encoding methods - Chatfield et. al. http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf
    - 3. Emergence of Object-Selective Features in Unsupervised Feature Learning - Coates, Ng http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf
  18. @debasishg revised this gist Feb 21, 2015. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions gistfile1.txt
    @@ -1,3 +1,5 @@
    + Feature Learning

    1. Learning Feature Representations with K-means - Adam Coates and Andrew Y. Ng http://www.cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf
    2. The devil is in the details: an evaluation of recent feature encoding methods - Chatfield et. al. http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf
    3. Emergence of Object-Selective Features in Unsupervised Feature Learning - Coates, Ng http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf
  19. @debasishg created this gist Sep 25, 2014.
    3 changes: 3 additions & 0 deletions gistfile1.txt
    @@ -0,0 +1,3 @@
    + 1. Learning Feature Representations with K-means - Adam Coates and Andrew Y. Ng http://www.cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf
    + 2. The devil is in the details: an evaluation of recent feature encoding methods - Chatfield et. al. http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf
    + 3. Emergence of Object-Selective Features in Unsupervised Feature Learning - Coates, Ng http://web.stanford.edu/~acoates/papers/coateskarpathyng_nips2012.pdf