
SPML2024

Assignments and Presentations for Security and Privacy in Machine Learning 2024

Course Info

The public page of the course is here. The first part of the course focused on Adversarial Examples (AE); the second part covered several topics, including Data Poisoning, Model Extraction (ME), Differential Privacy (DP), and the security of Large Language Models (LLMs).

Readings

The following papers were covered as part of the course.

| Topic | Paper |
| --- | --- |
| Introduction | Towards the Science of Security and Privacy in Machine Learning |
| AE Generating Methods | Intriguing Properties of Neural Networks |
| | Explaining and Harnessing Adversarial Examples |
| | Towards Evaluating the Robustness of Neural Networks |
| | Universal Adversarial Perturbations |
| | Adversarial Patch |
| Defenses Against AEs | Towards Deep Learning Models Resistant to Adversarial Attacks |
| | Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples |
| | Certified Adversarial Robustness via Randomized Smoothing |
| | Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers |
| Black-Box AEs | Practical Black-Box Attacks against Machine Learning |
| | ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models |
| | Black-box Adversarial Attacks with Limited Queries and Information |
| Poisoning | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain |
| | Clean-Label Backdoor Attacks |
| | Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks |
| | Deep Partition Aggregation: Provable Defense against General Poisoning Attacks |
| Model Extraction | High Accuracy and High Fidelity Extraction of Neural Networks |
| | Knockoff Nets: Stealing Functionality of Black-Box Models |
| | Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring |
| Privacy | Membership Inference Attacks against Machine Learning Models |
| | Passive and Active White-box Inference Attacks against Centralized and Federated Learning |
| | The Algorithmic Foundations of Differential Privacy |
| | Deep Learning with Differential Privacy |
| | Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data |
| LLM Security | Universal and Transferable Adversarial Attacks on Aligned Language Models |
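As a flavor of the AE-generation papers above, here is a minimal sketch of the Fast Gradient Sign Method from "Explaining and Harnessing Adversarial Examples", applied to a toy binary logistic-regression model (the model, weights, and inputs are hypothetical, not from the course materials):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Craft an FGSM adversarial example for binary logistic regression.

    For cross-entropy loss L = -[y log p + (1-y) log(1-p)] with
    p = sigmoid(w.x + b), the input gradient is dL/dx = (p - y) * w,
    and FGSM perturbs the input as x_adv = x + eps * sign(dL/dx).
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy usage: a point correctly classified as class 1 (p > 0.5)
# gets pushed across the decision boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # w @ x + b = 1.5, so p ~ 0.82
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=1.0)
# After the attack, w @ x_adv + b = -1.5, so p ~ 0.18: the label flips.
```

The same one-step, sign-of-gradient idea carries over to deep networks, where the gradient is obtained by backpropagation rather than in closed form.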

Additional Papers

The papers that were covered in the:

  • first series of presentations, which focused on Adversarial Examples, can be found here.
  • second series of presentations, which focused on Model Extraction, Privacy, and LLM Security, can be found here.
  • homework assignments, which focused on Black-Box AEs, DP, and LLM Security, can be found here.

Other Resources

The following supplementary resources can help you learn more or fill your knowledge gaps:
