sorobedio/Myreading

Network representation learning

Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights https://arxiv.org/abs/2209.14733

Learning to Learn with Generative Models of Neural Network Checkpoints https://arxiv.org/abs/2209.12892

Permutation Equivariant Neural Functionals https://arxiv.org/pdf/2302.14040.pdf

NeRN: Learning Neural Representations for Neural Networks https://arxiv.org/pdf/2212.13554.pdf

Equivariant Tensor Networks https://arxiv.org/pdf/2304.08226.pdf

Learning useful representations for shifting tasks and distributions https://arxiv.org/abs/2212.07346

Domain generalization

HMOE: Hypernetwork-based Mixture of Experts for Domain Generalization https://arxiv.org/abs/2211.08253

Causality Inspired Representation Learning for Domain Generalization https://arxiv.org/abs/2203.14237

Diverse Weight Averaging for Out-of-Distribution Generalization https://arxiv.org/abs/2205.09739

Unsupervised Domain Expansion for Visual Categorization https://arxiv.org/abs/2104.00233

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time https://arxiv.org/abs/2203.05482

Frequency Decomposition to Tap the Potential of Single Domain for Generalization https://arxiv.org/abs/2304.07261

Semantic-Aware Mixup for Domain Generalization https://arxiv.org/abs/2304.05675

Zoo-Tuning: Adaptive Transfer from a Zoo of Models https://arxiv.org/abs/2106.15434

Fine-tuning and model distillation

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution https://arxiv.org/abs/2202.10054

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time https://arxiv.org/abs/2203.05482
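The model-soups entry above describes a recipe simple enough to sketch: average the weights of several models fine-tuned from the same pretrained checkpoint and load the result back into the architecture. A minimal sketch in PyTorch (the helper name and the assumption that all checkpoints share identical state-dict keys are mine; the paper's greedy variant additionally keeps a checkpoint only if it improves held-out accuracy):

```python
import torch

def uniform_soup(state_dicts):
    """Uniformly average parameters across fine-tuned checkpoints.

    Assumes every checkpoint was fine-tuned from the same pretrained model,
    so all state dicts share the same keys and tensor shapes.
    """
    soup = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        # Cast back to the original dtype so integer buffers
        # (e.g. BatchNorm counters) still load cleanly.
        soup[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    return soup

# Usage: build the soup from saved checkpoints and load it into the model.
# soup = uniform_soup([torch.load(p, map_location="cpu") for p in checkpoint_paths])
# model.load_state_dict(soup)
```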

Neural architecture search

Can GPT-4 Perform Neural Architecture Search? https://arxiv.org/abs/2304.10970

Visual prompting

BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning https://arxiv.org/abs/2303.14773

Others

Neural Discrete Representation Learning https://arxiv.org/abs/1711.00937

Taming Transformers for High-Resolution Image Synthesis (a.k.a. VQGAN) https://compvis.github.io/taming-transformers/

Denoising Diffusion Probabilistic Models https://arxiv.org/abs/2006.11239

Denoising Diffusion Implicit Models https://arxiv.org/abs/2010.02502

High-Resolution Image Synthesis with Latent Diffusion Models https://arxiv.org/abs/2112.10752

Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups (2022) https://arxiv.org/abs/2110.13059

Equivariance Through Parameter-Sharing (2017) https://arxiv.org/abs/1702.08389

Group Equivariant Convolutional Networks (2016) https://arxiv.org/abs/1602.07576

General E(2)-Equivariant Steerable CNNs (2019) https://arxiv.org/abs/1911.08251

A Program to Build E(N)-Equivariant Steerable CNNs (2022) https://openreview.net/forum?id=WE4qe9xlnQw

escnn, a PyTorch library for E(2)- and E(3)-equivariant steerable CNNs: https://github.com/QUVA-Lab/escnn
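The escnn library above implements the steerable-CNN constructions from the group-equivariance papers in this list. A minimal sketch of a rotation-equivariant convolution, following the usage pattern in the escnn documentation (factory names such as rot2dOnR2 come from recent escnn versions and differ from the older e2cnn package; treat this as an illustrative sketch rather than version-exact code):

```python
import torch
from escnn import gspaces, nn

# Symmetry group: the 8 discrete planar rotations acting on 2D images.
r2_act = gspaces.rot2dOnR2(N=8)

# Input: 3 scalar (trivial-representation) channels, e.g. an RGB image.
feat_in = nn.FieldType(r2_act, 3 * [r2_act.trivial_repr])
# Output: 16 copies of the regular representation of the rotation group.
feat_out = nn.FieldType(r2_act, 16 * [r2_act.regular_repr])

conv = nn.R2Conv(feat_in, feat_out, kernel_size=5, padding=2)
relu = nn.ReLU(feat_out)

x = feat_in(torch.randn(4, 3, 32, 32))  # wrap a plain tensor as a GeometricTensor
y = relu(conv(x))  # rotating the input transforms the output predictably
```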

About

A list of papers to read.
