I am trying to get a simple autoencoder to work with the MNIST data (see #601). The training configuration is:

```scala
DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss())
  .optDevices(trainSetUp.devices)
  .addTrainingListeners(TrainingListener.Defaults.logging(outputDir): _*)
  .addTrainingListeners(listener)
  .optOptimizer(Adam.builder().build())
```

I have tried training with both `DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss())` and `DefaultTrainingConfig(Loss.l2Loss())`, and in both cases I get an error.
I note that I was expecting the `trainer.initialize(trainSetUp.inputShape)` call to succeed with either loss. How are those loss functions different? What must be changed or set in these cases? How should one go about diagnosing this? For reference, I show below the NN used in this case. TIA

```scala
def net1(): SequentialBlock =
  val net = SequentialBlock()
  net.add(Blocks.batchFlattenBlock(784))          // flatten the 28x28 image to 784
  net.add(Linear.builder().setUnits(32).build())  // encoder: 784 -> 32
  // net.add(Activation::relu)
  net.add(relu)
  net.add(Linear.builder().setUnits(784).build()) // decoder: 32 -> 784
  net.add(sigmoid)                                // pixel values in [0, 1]
  net.setInitializer(NormalInitializer(0.01f))
  net
```
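One general way to diagnose shape problems independently of the `Trainer` is to initialize the block by hand and push a dummy batch through it. This is only a sketch against DJL's lower-level `Block` API; `manager`, `x`, and `out` are names invented here:

```scala
import ai.djl.ndarray.{NDList, NDManager}
import ai.djl.ndarray.types.{DataType, Shape}
import ai.djl.training.ParameterStore

// Sketch: initialize net1 manually and run one dummy batch through it
// to see the shape the block actually produces.
val manager = NDManager.newBaseManager()
val net = net1()
net.initialize(manager, DataType.FLOAT32, Shape(1, 784))
val x = manager.ones(Shape(1, 784))
val out = net.forward(ParameterStore(manager, false), NDList(x), false)
println(out.singletonOrThrow().getShape) // expect (1, 784) for this autoencoder
```

If the printed shape is (1, 784), the block itself is fine and the mismatch is between the prediction and whatever the dataset supplies as the label.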
Finally figured it out. I was using the incorrect dataset: the one that uses a vector of integers (class indices) as labels. Apologies for the noise.
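For anyone who lands here with the same symptom: `softmaxCrossEntropyLoss` expects classification labels (class indices scored against logits), while `l2Loss` compares prediction and label elementwise, so an autoencoder needs the image itself as the label, not its class. A minimal sketch of such a dataset, assuming DJL's `ArrayDataset` and a hypothetical `images` NDArray already flattened to shape (N, 784):

```scala
import ai.djl.ndarray.NDArray
import ai.djl.training.dataset.ArrayDataset

// Sketch: a dataset whose labels are the images themselves, so that
// Loss.l2Loss() measures reconstruction error. `images` is assumed to be
// a pre-flattened (N, 784) NDArray with pixel values scaled to [0, 1].
def autoencoderDataset(images: NDArray, batchSize: Int): ArrayDataset =
  ArrayDataset.Builder()
    .setData(images)              // network input
    .optLabels(images)            // target == input for reconstruction
    .setSampling(batchSize, true) // shuffled batches of size batchSize
    .build()
```

With labels shaped like the predictions, `trainer.initialize(trainSetUp.inputShape)` and `Loss.l2Loss()` line up as expected.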