# GSoC 2014: Extending Neural Networks Module for Scikit-learn

**Name:** Issam H. Laradji

**Email:** [email protected]

**Github:** IssamLaradji

**Time zone:** UTC+03:00

**Blog:** http://easymachinelearning.blogspot.com/

**GSoC Blog RSS feed:**

## University Information

**University:** King Fahd University of Petroleum & Minerals

**Major:** Computer Science

**Current Year:** Fourth semester, Master's degree

**Expected Graduation Date:** May 15, 2014

**Degree:** Master of Science

## Proposal Title

Extending Neural Networks Module for Scikit-learn

## Proposal Abstract

The project has three main objectives:

1. To implement Extreme Learning Machines (ELM), Sequential ELM, and Regularized ELM
2. To implement Sparse Auto-encoders
3. To extend the Multi-layer Perceptron to support more than one hidden layer

Below I explain each contribution in detail; short illustrative sketches follow the descriptions.

**Extreme Learning Machines (ELM):** ELM is a powerful predictor with high generalization performance. It solves its prediction objective through least-square solutions, so it trains very quickly. It takes the form of a single-hidden-layer feedforward network that randomly generates the input weights and then solves for the output weights. The implementation will be based on the work of Huang et al. [1].

**Sequential ELM:** One drawback of ELM is the need to process the whole matrix representing the dataset at once, which causes problems on computers with little memory. Sequential ELM counteracts this by training on the dataset in relatively small batches. It updates its output weights exactly as new batches arrive, since it relies on a recursive least-squares scheme. The implementation will be based on the work of Huang et al. [2].

**Regularized ELM:** In ELM, the least-squares problem for the unknown output weights is over-determined, as the number of samples far exceeds the number of unknown variables. The algorithm can therefore overfit the training set. Regularized ELM counteracts this overfitting by constraining the solution to be small (similar to the regularization term in the Multi-layer Perceptron). The implementation will be based on the work of Huang et al. [3].

**Sparse Auto-encoders (SAE):** Feature extraction has long been in the spotlight of machine learning research. Commonly used for image recognition, an SAE extracts suitable, interesting structural information from the input samples. It learns to reconstruct the input samples while constraining its extracted hidden features to be sparse. The SAE objective is that of a standard Multi-layer Perceptron plus an additional term, the Kullback–Leibler divergence [4], which imposes the sparsity constraint. Besides extracting new features, it can also provide good initial weights for networks trained with backpropagation. The implementation will be based on Andrew Ng's notes [5].

**Greedy Layer-Wise Training of Deep Networks:** This would allow the Multi-layer Perceptron module to support more than one hidden layer. In addition, it would use the scikit-learn pipelining functions to support initializing weights after applying either Sparse Auto-encoders or Restricted Boltzmann Machines. My main reference for the implementation is the UFLDL tutorial [6].
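To make the ELM training step concrete, here is a minimal NumPy sketch; the function names, the tanh activation, and the uniform initialization are illustrative choices for the sketch, not the final scikit-learn API:

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, random_state=0):
    """Basic ELM: random input weights, least-squares output weights.

    X : (n_samples, n_features) inputs; T : (n_samples, n_outputs) targets.
    """
    rng = np.random.RandomState(random_state)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random, never trained
    b = rng.uniform(-1, 1, n_hidden)                # random hidden biases
    H = np.tanh(np.dot(X, W) + b)                   # hidden-layer activations
    beta = np.dot(np.linalg.pinv(H), T)             # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.dot(np.tanh(np.dot(X, W) + b), beta)
```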
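The heart of Sequential ELM is a recursive step that folds each new batch into the current solution without revisiting old data. A sketch of that update, following [2] and assuming `H_new` holds the hidden activations of the incoming batch (computed as in the sketch above) and `P` was initialized from the first batch as the inverse of `H0.T.dot(H0)`:

```python
def oselm_update(P, beta, H_new, T_new):
    """One OS-ELM step: update (P, beta) from a new batch.

    P    : (n_hidden, n_hidden) running inverse of H^T H
    beta : (n_hidden, n_outputs) current output weights
    """
    # Woodbury-style update of P for the rows appended to H
    K = np.eye(H_new.shape[0]) + H_new.dot(P).dot(H_new.T)
    P = P - P.dot(H_new.T).dot(np.linalg.solve(K, H_new.dot(P)))
    # Correct beta using only the residual on the new batch
    beta = beta + P.dot(H_new.T).dot(T_new - H_new.dot(beta))
    return P, beta
```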
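Regularized ELM changes only the solve step: in the unified formulation of [3], the output weights become `beta = inv(H.T H + I/C) H.T T`, where `C` controls the regularization strength. A sketch:

```python
def regularized_elm_solve(H, T, C=1.0):
    """Ridge-regularized ELM output weights: (H^T H + I/C)^{-1} H^T T."""
    A = H.T.dot(H) + np.eye(H.shape[1]) / C
    return np.linalg.solve(A, H.T.dot(T))
```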
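The term that distinguishes an SAE from a plain auto-encoder is the KL-divergence penalty between a target activation `rho` and each hidden unit's mean activation over the batch. A sketch of that penalty following [5] (the clipping guard is my own numerical-safety addition):

```python
def kl_sparsity_penalty(hidden_activations, rho=0.05):
    """Sum over hidden units of KL(rho || rho_hat_j).

    hidden_activations : (n_samples, n_hidden) sigmoid activations.
    """
    rho_hat = np.clip(hidden_activations.mean(axis=0), 1e-10, 1 - 1e-10)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
```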
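As a rough illustration of the intended pipelining flow, layer-wise pretraining already composes with components scikit-learn ships today; the sketch below stacks two `BernoulliRBM` feature layers under a supervised top layer. The proposed work would go further and transfer the learned weights into the Multi-layer Perceptron, a step this sketch does not perform:

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Two unsupervised RBM layers feed a supervised classifier; each RBM is
# trained greedily on the output of the layer below it when fit() is called.
model = Pipeline([
    ('rbm1', BernoulliRBM(n_components=256, learning_rate=0.05, random_state=0)),
    ('rbm2', BernoulliRBM(n_components=64, learning_rate=0.05, random_state=0)),
    ('clf', LogisticRegression()),
])
# model.fit(X_train, y_train)  # inputs should be scaled to [0, 1] for BernoulliRBM
```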
## Tentative Timeline

**Week 1, 2 (May 19 - May 25)**

**Goal**: Implement and revise Extreme Learning Machines.

**Week 3, 4 (May 26 - June 8)**

**Goal**: Implement and revise Sequential Extreme Learning Machines.

**Week 5, 6 (June 9 - June 29)**

**Goal**: Implement and revise Regularized Extreme Learning Machines.

**Note**: Pass midterm evaluation on June 27.

**Week 7, 8, 9 (June 30 - July 20)**

**Goal**: Implement and revise Sparse Auto-encoders.

**Week 10, 11, 12 (July 21 - August 11)**

**Goal**: Implement and revise Greedy Layer-Wise Training of Deep Networks.

**Week 13** - Wrap-up.

## Past Work

1. https://github.com/scikit-learn/scikit-learn/pull/2120 - (not merged) - Multi-layer Perceptron. About to be merged; it is waiting for a few changes and a final review.
2. https://github.com/scikit-learn/scikit-learn/pull/2099 - (not merged) - Sparse Auto-encoders.
3. https://github.com/scikit-learn/scikit-learn/pull/2680 - (not merged) - Gaussian Restricted Boltzmann Machines.

## References

1. http://www.di.unito.it/~cancelli/retineu11_12/ELM-NC-2006.pdf
2. http://www.ntu.edu.sg/home/egbhuang/pdf/OS-ELM-TNN.pdf
3. http://www.ntu.edu.sg/home/egbhuang/pdf/ELM-Unified-Learning.pdf
4. http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
5. http://www.stanford.edu/class/cs294a/sparseAutoencoder.pdf
6. http://ufldl.stanford.edu/wiki/index.php/Exercise:_Implement_deep_networks_for_digit_classification