
📈 BYOL (Bootstrap Your Own Latent) – From Scratch Implementation in PyTorch


Welcome to the BYOL (Bootstrap Your Own Latent) repository! This is a comprehensive, from-scratch PyTorch implementation of BYOL, a self-supervised representation learning algorithm that learns useful image features without labels and without the negative pairs that contrastive methods require.


🔧 Requirements

Before running the code, ensure you have the following dependencies:

  • Python 3: The language used for implementation.
  • PyTorch: The deep learning framework powering our model training and evaluation.

🧠 Model Overview

BYOL represents a breakthrough in self-supervised learning. Unlike traditional contrastive methods, which rely on negative samples, BYOL trains on positive pairs alone: two differently augmented views of the same image. An online network learns to predict a target network's representation of the other view, while the target network is updated as an exponential moving average of the online network's weights. This approach simplifies training and reduces computational requirements while achieving remarkable results.

Key Features:

  • No Negative Samples Required: Efficient training by focusing exclusively on positive pairs.
  • State-of-the-Art Results: Achieves impressive performance on various image classification benchmarks.

BYOL model overview (figure)
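The online/target structure described above can be sketched as follows. This is a minimal illustration, not the exact code in this repository; the head sizes (4096 hidden units, 256-dim projection) and the EMA decay `tau=0.996` follow common BYOL conventions and are assumptions here.

```python
import copy
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Projector/predictor head in the BYOL style (sizes are illustrative)."""
    def __init__(self, in_dim, hidden_dim=4096, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class BYOL(nn.Module):
    def __init__(self, encoder, feat_dim, tau=0.996):
        super().__init__()
        self.tau = tau
        # Online network: encoder -> projector -> predictor (all trained by gradient).
        self.online_encoder = encoder
        self.online_projector = MLP(feat_dim)
        self.predictor = MLP(256)
        # Target network: an EMA copy of the online encoder+projector; no gradients.
        self.target_encoder = copy.deepcopy(encoder)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in list(self.target_encoder.parameters()) + list(self.target_projector.parameters()):
            p.requires_grad = False

    @torch.no_grad()
    def update_target(self):
        # EMA update: theta_target <- tau * theta_target + (1 - tau) * theta_online
        for online, target in [(self.online_encoder, self.target_encoder),
                               (self.online_projector, self.target_projector)]:
            for po, pt in zip(online.parameters(), target.parameters()):
                pt.data.mul_(self.tau).add_(po.data, alpha=1 - self.tau)

    def forward(self, v1, v2):
        # Online predictions for both augmented views.
        p1 = self.predictor(self.online_projector(self.online_encoder(v1)))
        p2 = self.predictor(self.online_projector(self.online_encoder(v2)))
        # Target projections, computed under stop-gradient.
        with torch.no_grad():
            z1 = self.target_projector(self.target_encoder(v1))
            z2 = self.target_projector(self.target_encoder(v2))
        return p1, p2, z1, z2
```

After each optimizer step on the online network, `update_target()` is called so the target network slowly tracks the online one; this slow-moving target is what prevents the representations from collapsing despite the absence of negative samples.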

📁 Dataset

STL10 Dataset

We use the STL10 dataset to evaluate our BYOL implementation. This dataset is tailored for developing and testing unsupervised feature learning and self-supervised learning models.

  • Overview: Contains 10 classes, each with 500 training images and 800 test images, plus 100,000 unlabeled images intended for unsupervised learning.
  • Source: STL10 Dataset

STL10 dataset sample (figure)

📊 Results

Our experiments highlight the impact of BYOL pretraining on the STL10 dataset:

  • Without Pretraining: Baseline classification accuracy of 84.58%.
  • With BYOL Pretraining: Accuracy improved to 87.61% after 10 epochs of pretraining, demonstrating BYOL's effectiveness.

Implementation Insights

This repository features a complete, from-scratch implementation of BYOL. For our experiments, we used a ResNet18 model pretrained on ImageNet as the encoder, showcasing how leveraging pretrained models can further enhance BYOL’s capabilities.
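The training objective in the BYOL paper is a normalized mean squared error between the online network's prediction and the target network's projection, which reduces to `2 - 2 * cosine_similarity`. A minimal sketch (names here are illustrative, not this repository's exact API):

```python
import torch
import torch.nn.functional as F

def byol_loss(p, z):
    """Normalized MSE between online prediction p and target projection z.

    Equivalent to 2 - 2 * cosine_similarity(p, z), averaged over the batch.
    """
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

# The loss is symmetrized over the two views: p1 predicts z2 and p2 predicts z1.
# loss = byol_loss(p1, z2) + byol_loss(p2, z1)
```

Only the online network receives gradients from this loss; the target projections are treated as fixed regression targets for that step.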
