Samuel-Bachorik/machine_learning_library

A fully automatic backpropagation algorithm implemented in NumPy, working through various linear layers, activation functions, and loss functions

  • What are the derivatives of the weights with respect to the loss function? How do you update the model's weights to get better predictions?

  • The backpropagation algorithm implemented in this library answers these questions for you.

  • Build your model, feed in your data, run the backward pass, save the model weights, load them again, and test the trained model in real life (a small save/load sketch follows below)
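
As a minimal illustration of the "save model weights, load model weights" step, plain NumPy can persist a dictionary of weight arrays. The library may provide its own save/load helpers, so the snippet below is only a sketch:

```python
import numpy as np

# Illustrative only -- the library may expose its own save/load helpers.
weights = {"W1": np.random.randn(128, 784), "B1": np.zeros(128)}

np.savez("model_weights.npz", **weights)   # save trained weights to disk

loaded = np.load("model_weights.npz")      # load them back for inference
W1, B1 = loaded["W1"], loaded["B1"]
```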

Tests on handwritten digits - MNIST

  • Library tests were performed on the MNIST dataset, which contains 60,000 training images and 10,000 test images.

Cross-entropy loss during training

This graph shows the model loss optimized with the Adam optimizer; after 50 epochs with Adam the average loss per epoch is about 0.0003.

MNIST cross-entropy loss with Adam (plot)
The next graph shows the model loss optimized with a basic SGD optimizer; with SGD the loss does not drop below 0.3859.

MNIST cross-entropy loss with SGD (plot)

Measuring accuracy of trained models

After successful training, the accuracy of the model was tested on 10,000 test images that the model had never seen before.

  • The model optimized with the Adam optimizer reaches 99.79 % accuracy, misclassifying only 21 of 10,000 test images
  • The model optimized with the SGD optimizer reaches 92.31 % accuracy, misclassifying 769 of 10,000 test images (a minimal accuracy check is sketched below)
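
The accuracy check itself is simple: run the test images through the trained model, take the arg-max class per image, and count the misses. A minimal NumPy sketch, assuming a hypothetical `model_forward` callable that returns per-class scores (not necessarily this library's API):

```python
import numpy as np

def accuracy(model_forward, test_images, test_labels):
    # model_forward: hypothetical callable mapping a batch of images to
    # per-class scores of shape (num_images, 10).
    scores = model_forward(test_images)
    predictions = np.argmax(scores, axis=1)          # predicted digit per image
    missed = int(np.sum(predictions != test_labels))
    return 1.0 - missed / len(test_labels), missed

# 21 misses out of 10,000 images corresponds to 99.79 % accuracy.
```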

Theory

  • Example of a forward and backward pass on a three-layer computational graph
  • MSE loss at the end of the model
  • The chart and equations were made with Lucidchart

Computational graph (chart made with Lucidchart)

  • The next image shows the forward and backward computation corresponding to the previous computational graph (a NumPy version of the same idea is sketched after the image)
  • SGD optimization of two weights

Forward and backward computation (chart made with Lucidchart)
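
As a rough NumPy companion to the charts above, here is a minimal sketch of a forward and backward pass through a small chain with two weights and an MSE loss, followed by one SGD step. The tanh activation in the middle is an assumption for illustration; the actual graph in the chart may differ:

```python
import numpy as np

# Tiny computational graph: linear -> tanh -> linear, MSE loss at the end.
x, target = 2.0, 1.0
w1, w2 = 0.5, -0.3
lr = 0.1

# Forward pass
h = w1 * x                      # first linear node
a = np.tanh(h)                  # activation node (assumed tanh)
y_hat = w2 * a                  # second linear node
loss = (y_hat - target) ** 2    # MSE for a single sample

# Backward pass: chain rule from the loss back to the two weights
dL_dy = 2.0 * (y_hat - target)
dL_dw2 = dL_dy * a              # dL/dw2
dL_da = dL_dy * w2
dL_dh = dL_da * (1.0 - np.tanh(h) ** 2)   # derivative of tanh
dL_dw1 = dL_dh * x              # dL/dw1

# SGD update of the two weights
w1 -= lr * dL_dw1
w2 -= lr * dL_dw2
```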

Training process explained

Training loop chart (made with Lucidchart)
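
The chart boils down to the usual loop: forward pass, compute the loss, backward pass, optimizer step, repeated over mini-batches and epochs. A minimal sketch with hypothetical object names (model, optimizer, loss_fn are placeholders, not necessarily this library's API):

```python
def train(model, optimizer, loss_fn, batches, epochs=50):
    # Hypothetical interfaces: model.forward / model.backward,
    # optimizer.step, and loss_fn returning (loss_value, gradient).
    for epoch in range(epochs):
        epoch_loss = 0.0
        for x, y in batches:
            y_hat = model.forward(x)          # forward pass
            loss, grad = loss_fn(y_hat, y)    # loss and its gradient
            model.backward(grad)              # backpropagate the gradient
            optimizer.step()                  # update weights (SGD or Adam)
            epoch_loss += loss
        print(f"epoch {epoch}: average loss {epoch_loss / len(batches):.4f}")
```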

Linear layer

  • A linear layer applies a linear transformation to the incoming data x (a NumPy sketch follows the equation below)
  • x, W and B are tensors
  • T denotes the matrix transpose

y = x · Wᵀ + B
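
A minimal NumPy sketch of such a layer (illustrative, not this library's implementation), showing both the forward transformation and the gradients used during backpropagation:

```python
import numpy as np

class Linear:
    # Sketch of a linear layer: y = x @ W.T + B
    def __init__(self, in_features, out_features):
        self.W = np.random.randn(out_features, in_features) * 0.01
        self.B = np.zeros(out_features)

    def forward(self, x):
        self.x = x                       # cache input for the backward pass
        return x @ self.W.T + self.B

    def backward(self, grad_out):
        # grad_out: dL/dy with shape (batch, out_features)
        self.dW = grad_out.T @ self.x    # dL/dW
        self.dB = grad_out.sum(axis=0)   # dL/dB
        return grad_out @ self.W         # dL/dx for the previous layer
```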

Stochastic gradient descent

Intuition behind the Adam optimizer: https://www.geeksforgeeks.org/intuition-of-adam-optimizer/
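
For reference, the two update rules compared above can be written in a few lines of NumPy; this is a generic sketch of standard SGD and Adam, not the library's own optimizer code:

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    # Plain stochastic gradient descent: step against the gradient.
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps exponential moving averages of the gradient (m) and of
    # its square (v); bias correction gives a per-parameter step size.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)            # t is the 1-based step count
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```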