
Add more model extraction attacks #68

Open
suhacker1 opened this issue May 6, 2021 · 0 comments
Labels: extraction (Related to model extraction attacks), good first issue (Good for newcomers), help wanted (Extra attention is needed), user-facing (Features that will directly impact users)

suhacker1 commented May 6, 2021

Is your feature request related to a problem? Please describe.
We want every model extraction attack to be achievable in PrivacyRaven. This does not include side channel, white-box, full or partial prediction, or explanation-based attacks.

Describe the solution you'd like.
PrivacyRaven has three interfaces for attacks:

  1. The core interface defines each attack parameter individually.
  2. The specific interface runs a predefined attack configuration.
  3. The cohesive interface runs every possible attack.

A user should be able to run each attack through every interface, which means that all of the building blocks for an attack must be contained within PrivacyRaven. For example, any new synthesizers or subset selection strategies that a specific attack requires should be added first, so that the attack can also be composed through the core interface.
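The relationship between the three interfaces can be sketched roughly as follows. All function and parameter names here are hypothetical placeholders for illustration, not PrivacyRaven's actual API:

```python
# Illustrative sketch of the three interface levels described above.
# Names ("core_extraction", "copycat_attack", etc.) are hypothetical.

def core_extraction(query, synthesizer, subset_strategy, query_budget):
    """Core interface: every attack parameter is set individually."""
    return {
        "synthesizer": synthesizer,
        "subset_strategy": subset_strategy,
        "query_budget": query_budget,
    }

def copycat_attack(query):
    """Specific interface: a predefined configuration of the core one."""
    return core_extraction(
        query,
        synthesizer="copycat",
        subset_strategy="random",
        query_budget=100,
    )

def run_all_attacks(query):
    """Cohesive interface: run every available predefined attack."""
    return [attack(query) for attack in (copycat_attack,)]
```

Under this layering, adding a new attack means contributing its missing building blocks (e.g., a synthesizer) to the core layer and then registering a predefined configuration, which the cohesive interface picks up automatically.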

If you would like to implement an attack, comment with the name of the paper. Then, create a new issue referencing this issue with the name of the paper in the title.

Detail any additional context.
This is a list of papers describing model extraction attacks that should be added to PrivacyRaven.

  1. Knockoff nets: Stealing functionality of black-box models — blocked on #10 (Add retraining and subset sampling to extraction)
  2. PRADA: protecting against DNN model stealing attacks — missing synthesizer
  3. CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples — missing some synthesizers
  4. ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data — blocked on #10 (Add retraining and subset sampling to extraction)
  5. Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack
  6. Special-Purpose Model Extraction Attacks: Stealing Coarse Model with Fewer Queries
  7. Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
  8. ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles
  9. Simulating Unknown Target Models for Query-Efficient Black-box Attacks
  10. Thieves on Sesame Street! Model Extraction of BERT-based APIs
suhacker1 added the good first issue, help wanted, extraction, and user-facing labels on May 6, 2021