@ravithejaburugu
Forked from ChessPoker/networkConfig.java
Created January 21, 2017 19:12
Network Config: a Deeplearning4j (pre-1.0 API) MultiLayerConfiguration for a stacked GravesLSTM recurrent network with a softmax RNN output layer.
import org.deeplearning4j.nn.api.OptimizationAlgorithm;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.Updater;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// learningRate, nIn and nOut are not defined in the original snippet; placeholder values assumed.
double learningRate = 0.01;  // assumed learning rate
int nIn = 10;                // assumed number of input features per time step
int nOut = 5;                // assumed number of output classes

MultiLayerNetwork net;
// two hidden layers of 3 neurons each
final int[] LSTMLayers = new int[]{3, 3};
NeuralNetConfiguration.ListBuilder list = new NeuralNetConfiguration.Builder()
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).iterations(1)
        .learningRate(learningRate)
        .regularization(true).l2(0.0000001)
        .seed(76692)
        .weightInit(WeightInit.XAVIER)
        .updater(Updater.ADAM).adamMeanDecay(0.99).adamVarDecay(0.9999)
        .list();

// Stack the hidden GravesLSTM layers; each layer's nIn is the previous layer's nOut.
int layerIdx = 0;
for (; layerIdx < LSTMLayers.length; layerIdx++) {
    list = list.layer(layerIdx, new GravesLSTM.Builder().nIn(nIn).nOut(LSTMLayers[layerIdx])
            .activation("softsign").build());
    nIn = LSTMLayers[layerIdx];
}

// Softmax output layer with multi-class cross-entropy (MCXENT) loss.
list.layer(layerIdx, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT).activation("softmax")
        .nIn(nIn).nOut(nOut).build());
MultiLayerConfiguration conf = list.pretrain(false).backprop(true).build();
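
Not part of the original gist: a minimal usage sketch showing how the configuration above would typically be turned into a network and fitted. It assumes a hypothetical DataSetIterator named trainData that supplies the training sequences, and an assumed epoch count of 20.

// Requires: import org.deeplearning4j.optimize.listeners.ScoreIterationListener;
//           import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
// "trainData" is a hypothetical DataSetIterator; it is not defined in the original snippet.
net = new MultiLayerNetwork(conf);
net.init();
net.setListeners(new ScoreIterationListener(10));  // log the score every 10 iterations

int nEpochs = 20;  // assumed number of training epochs
for (int epoch = 0; epoch < nEpochs; epoch++) {
    net.fit(trainData);
    trainData.reset();  // rewind the iterator before the next epoch
}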