“Artificial” neural networks are inspired by the organic brain, translated to the computer. It’s not a perfect comparison, but there are neurons, activations, and lots of interconnectivity, even if the underlying processes are quite different.

A single neuron by itself is relatively useless, but, when combined with hundreds or thousands (or many more) of other neurons, the interconnectivity produces relationships and results that frequently outperform any other machine learning methods.
Even though libraries such as PyTorch and TensorFlow are readily available, using them alone doesn't reveal how neural networks actually work at the core level. Understanding the internals makes it much easier to fine-tune hyperparameters and to debug errors when they arise.
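For instance, at its core a single neuron just computes a weighted sum of its inputs plus a bias. A minimal NumPy sketch (the values are made up purely for illustration):

```python
import numpy as np

# Illustrative example values; a real network learns these during training.
inputs = np.array([1.0, 2.0, 3.0])    # outputs from the previous layer
weights = np.array([0.2, 0.8, -0.5])  # one weight per input connection
bias = 2.0                            # shifts the weighted sum

# A single neuron: weighted sum of its inputs plus a bias.
output = np.dot(inputs, weights) + bias
print(output)  # 2.3
```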
This repository implements the following components from scratch:

1. Activation Functions:
- Linear Activation
- ReLU Activation
- Sigmoid Activation
- Softmax Activation
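As a taste of what "from scratch" means here, a minimal NumPy sketch of two of these forward passes (the function names are illustrative; the repository may organize them differently, e.g. as classes):

```python
import numpy as np

def relu(x):
    # ReLU: zero out negative values, pass positives through unchanged.
    return np.maximum(0, x)

def softmax(x):
    # Subtract the row-wise max for numerical stability before exponentiating.
    exp_values = np.exp(x - np.max(x, axis=1, keepdims=True))
    # Normalize so each row sums to 1, giving a probability distribution.
    return exp_values / np.sum(exp_values, axis=1, keepdims=True)

print(relu(np.array([-1.5, 0.0, 2.7])))   # [0.  0.  2.7]
print(softmax(np.array([[2.0, 1.0, 0.1]])))  # ~[[0.659 0.242 0.099]]
```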
2. Loss Functions:
- Binary Cross Entropy Loss
- Categorical Cross Entropy Loss
- Mean Absolute Error Loss
- Mean Squared Error Loss
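For example, categorical cross-entropy with sparse (integer) class labels can be written in a few lines of NumPy (a sketch; the repository's actual interface may differ):

```python
import numpy as np

def categorical_cross_entropy(y_pred, y_true):
    # Clip predictions away from 0 and 1 to avoid log(0).
    y_pred = np.clip(y_pred, 1e-7, 1 - 1e-7)
    # Pick the predicted probability of the correct class for each sample.
    correct_confidences = y_pred[np.arange(len(y_pred)), y_true]
    # Loss is the mean negative log-likelihood over the batch.
    return np.mean(-np.log(correct_confidences))

y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
y_true = np.array([0, 1])  # correct class indices
print(categorical_cross_entropy(y_pred, y_true))  # ~0.290
```

Binary cross-entropy follows the same negative-log-likelihood idea, but treats each output as an independent probability.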
3. Optimizers:
- Stochastic Gradient Descent Optimizer (SGD)
- Adagrad Optimizer
- Adam Optimizer
- Root Mean Squared Propagation Optimizer (RMSprop)
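The simplest of these, vanilla SGD, steps each parameter against its gradient, scaled by a learning rate (a minimal sketch; the gradients below are hypothetical stand-ins for what backpropagation would produce):

```python
import numpy as np

def sgd_update(params, grads, learning_rate=0.01):
    # Vanilla SGD: move each parameter opposite its gradient.
    for p, g in zip(params, grads):
        p -= learning_rate * g  # in-place update

weights = np.array([0.5, -0.3])
biases = np.array([0.1])
# Hypothetical gradients, as if produced by backpropagation.
grad_w = np.array([0.2, -0.1])
grad_b = np.array([0.05])

sgd_update([weights, biases], [grad_w, grad_b], learning_rate=0.1)
print(weights, biases)  # [ 0.48 -0.29] [0.095]
```

Adagrad, RMSprop, and Adam all build on this update rule by keeping running statistics of past gradients to scale the step size per parameter.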
The complete model is tested on the Fashion-MNIST dataset. After some experimentation to find good hyperparameters, it reaches an accuracy of 90%.
⭐ Please Star and share the repository. Thanks! ❤️