Awesome Attribution of Generative Models

A curated list of resources on attribution of generative models, inspired by awesome-implicit-representations.

Work-in-progress.

This list is intended to introduce the key concepts of attribution of generative models. We hope this reading list serves as a good introduction for anyone who wants to start studying this topic.

We include a short summary of a paper where it is helpful.

Note that this list does not include fake detection or fake localization.

What is attribution of generative models?

Attribution of generative models is a way to trace the source of generated media. Existing research mostly focuses on fake detection or fake localization; attribution, by contrast, deals not only with detection but also with classification that maps generated media to the responsible generative model or its owner. To achieve stable attribution, the attribution signal should be robust against post-processing (e.g. blur, crop, JPEG compression). The signal also needs to be imperceptible so that the original purpose of the generative model is not compromised. Attribution methods are therefore evaluated on attribution accuracy, robustness, and imperceptibility.
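As a concrete illustration of these three evaluation axes, below is a minimal sketch of an evaluation loop; it is not the protocol of any particular paper. The `attribute` function (any classifier mapping an image to a source label) and the `samples` triples (original image, attribution-marked image, true label) are assumptions that the reader supplies.

```python
# Minimal sketch of attribution evaluation: accuracy, robustness to
# post-processing, and imperceptibility. `attribute` and `samples` are
# placeholders supplied by the reader; this is not any paper's exact protocol.
import io

import numpy as np
from PIL import Image, ImageFilter


def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Simulate a common post-process: JPEG re-compression (input assumed RGB)."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return Image.open(buf).convert("RGB")


def psnr(a: Image.Image, b: Image.Image) -> float:
    """Peak signal-to-noise ratio in dB, a simple proxy for imperceptibility."""
    x, y = np.asarray(a, dtype=np.float64), np.asarray(b, dtype=np.float64)
    mse = np.mean((x - y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)


def evaluate(attribute, samples):
    """attribute: image -> predicted source label.
    samples: list of (original_image, marked_image, true_label) triples."""
    post = {
        "jpeg_q75": lambda im: jpeg_compress(im, 75),
        "blur": lambda im: im.filter(ImageFilter.GaussianBlur(radius=1.0)),
    }
    return {
        # accuracy on clean attribution-marked images
        "clean_acc": np.mean([attribute(m) == y for _, m, y in samples]),
        # accuracy after each post-process (robustness)
        "robust_acc": {k: np.mean([attribute(f(m)) == y for _, m, y in samples])
                       for k, f in post.items()},
        # average distortion introduced by the attribution signal (imperceptibility)
        "psnr_db": np.mean([psnr(o, m) for o, m, _ in samples]),
    }
```

In practice, the set of post-processing operations and the perceptual metric (e.g. a learned perceptual similarity measure instead of PSNR) differ from paper to paper.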

What are the threat cases, and why is attribution a possible solution?

Attribution of generative models is motivated by two types of threat models:

  1. Malicious Generation
    Attackers create and disseminate generated media for illegal purposes (e.g. fake profiles). Attribution can help a law enforcement agency identify suspects.
  2. Digital Copyright Infringement
    Attackers download copyrighted digital content (e.g. an art piece created with the assistance of a generative model), modify it, and claim it as their own. In this case, attribution allows the content owner to protect their intellectual property.

Papers

Model Attribution

The following papers show that the source model of fake media is attributable.

User Attribution

The following papers show that individual users are attributable even when they create fake media with generative models that share the same architecture.
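To make the intuition concrete, here is a toy, self-contained sketch of key-based user attribution under our own assumptions: each user is assigned a random pattern (a key), the key is added to the generated image at low amplitude, and attribution picks the user whose key correlates most strongly with the received image. This is only an illustration, not the scheme of any paper listed here; real methods embed and detect such signals far more robustly.

```python
# Toy illustration of key-based user attribution (not any listed paper's method):
# each user gets a random +/-1 key, the key is added to the generated image at
# low amplitude, and attribution picks the key that correlates best.
import numpy as np


def make_key(shape=(64, 64), seed=None):
    """User-specific pseudo-random +/-1 pattern (the 'key')."""
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=shape)


def embed(image, key, strength=4.0):
    """Additively embed the key at low amplitude (a crude imperceptibility proxy)."""
    return np.clip(image + strength * key, 0.0, 255.0)


def attribute_user(image, keys):
    """Return the user whose key correlates most with the mean-removed image."""
    residual = image - image.mean()
    scores = {user: float(np.sum(residual * key)) for user, key in keys.items()}
    return max(scores, key=scores.get)


# Usage: two users share the same "generator"; only their keys differ.
keys = {"alice": make_key(seed=1), "bob": make_key(seed=2)}
content = np.add.outer(np.linspace(0, 127, 64), np.linspace(0, 128, 64))  # smooth stand-in image
marked = embed(content, keys["alice"])
print(attribute_user(marked, keys))  # should print "alice": the key dominates the content correlation
```

The same trade-off discussed above applies: a larger embedding strength makes attribution more reliable but the signal less imperceptible.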

Sufficient Condition Studies

Empirical Studies

License

License: MIT
