This repository is a curated list of papers and open-source code related to competitions on adversarial attacks in computer vision.
- Adversarial Examples in Modern Machine Learning: A Review [Paper]
- Adversarial Attacks and Defenses in Images, Graphs and Text: A Review [Paper]
- Adversarial Attacks and Defences: A Survey [Paper]
- Adversarial Examples: Opportunities and Challenges [Paper]
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [Paper]
- Adversarial Examples: Attacks and Defenses for Deep Learning [Paper]
- Adversarial Attacks and Defences Competition [Paper]
- NIPS 2017: Targeted Adversarial Attack
  - dongyp13 : 1st place [code] [Boosting Adversarial Attacks with Momentum] (a minimal MI-FGSM sketch follows this list)
  - rwightman : 5th place [code]
  - rwightman : pytorch-nips2017-attack-example [code]
  - anlthms [code]
  - erko [code]
  - Stanford&Suns [code]
  - ysharma1126 [code]
  - EdwardTyantov [code]
  - ckomaki [code]
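The 1st-place attack above is the momentum iterative method (MI-FGSM) from "Boosting Adversarial Attacks with Momentum". Below is a minimal PyTorch sketch of the targeted update only; the model, epsilon, and step settings are illustrative assumptions, not the competition submission (which also attacked an ensemble of models).

```python
# Minimal sketch of a targeted MI-FGSM update (illustrative settings only).
import torch
import torch.nn.functional as F

def mi_fgsm_targeted(model, x, target, eps=16/255, steps=10, mu=1.0):
    """Push `x` toward class `target` within an L_inf ball of radius `eps`."""
    alpha = eps / steps                        # per-step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                    # accumulated momentum
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # normalize by the L1 norm before accumulating momentum (as in the paper)
        grad = grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad
        # targeted attack: step against the gradient, i.e. toward the target class
        x_adv = x_adv.detach() - alpha * g.sign()
        # project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```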
- NIPS 2017: Non-Targeted Adversarial Attack
  - dongyp13 : 1st place [code]
  - toshi-k : 5th place [code]
- NIPS 2017: Defense Against Adversarial Attack
  - cihangxie : 2nd place [code] (an input-randomization sketch follows this list)
  - lfz [code]
  - Roy-YL [code]
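The 2nd-place defense linked above is based on input randomization: each image is randomly resized and zero-padded before being classified, which disturbs the adversarial perturbation. The sketch below illustrates that idea only; the sizes and the surrounding classifier are illustrative assumptions, not the competition code.

```python
# Minimal sketch of the random resize + pad input-randomization idea
# (illustrative sizes; not the competition implementation).
import random
import torch
import torch.nn.functional as F

def randomized_forward(model, x, out_size=331, min_size=299):
    """Randomly resize and zero-pad `x` before classifying it."""
    new_size = random.randint(min_size, out_size - 1)            # random target size
    x = F.interpolate(x, size=(new_size, new_size), mode="nearest")
    pad = out_size - new_size                                    # total padding needed
    left = random.randint(0, pad)                                # random placement
    top = random.randint(0, pad)
    x = F.pad(x, (left, pad - left, top, pad - top), value=0.0)
    return model(x)
```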
- CAAD 2018
  - ysharma1126 [Tech Report]
  - FAIR & JHU team [Tech Report]
  - TSAIL team [Attack Tech Report]
  - TSAIL team [Defense Tech Report]
  - DLight team [Tech Report]
  - NorthWest Sec team [Tech Report]
  - Teaflow team [Tech Report]
  - RNG team [Tech Report]
  - Kunlin team [Tech Report]
  - Official source code [code]
  - jxjrework [code]
  - tianweiy [code]
  - geekpwn [code]
  - 0three [code]
- MCS 2018: Adversarial Attacks on Black-Box Face Recognition
  - snakers4 [code] [Tech Report]
- IJCAI-19: Alibaba Adversarial AI Challenge
  - Jowekk : 2nd place in the defense track [code]
  - jiangyangzhou : 5th place in the non-targeted attack track [code]
  - cshanjiewu : 6th place in the targeted attack track [code]
  - yfreedomliTHU [code]
- Tianchi: Security AI Challenger Program, Phase 1 - Adversarial Attacks on Face Recognition
  - BruceQFWang : 4th place [code]
  - tabsun : 10th place [code]
  - InoriJam : 11th place [code]
  - SunnyWangGitHub [code]
- cleverhans [Link]
- foolbox [Link]
- adversarial-robustness-toolbox (ART) [Link]
- Adversarial-Examples-Reading-List [Link]
- adversarial-example-pytorch [Link]
- advertorch [Link]
- EvadeML-Zoo [Link]
- U-Turn [Link]
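The toolboxes above wrap common models and ship reference implementations of many attacks. As one example, here is a minimal sketch of running an L_inf PGD attack with foolbox, assuming the foolbox 3.x PyTorch API; the network, preprocessing, and epsilon are illustrative choices.

```python
# Minimal sketch of an L_inf PGD attack with foolbox 3.x (illustrative setup).
import foolbox as fb
import torch
import torchvision.models as models

net = models.resnet18(pretrained=True).eval()
# preprocessing maps [0, 1] inputs to the ImageNet normalization the net expects
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(net, bounds=(0, 1), preprocessing=preprocessing)

images = torch.rand(4, 3, 224, 224)           # placeholder batch in [0, 1]
labels = fmodel(images).argmax(dim=-1)        # use the model's own predictions as labels

attack = fb.attacks.LinfPGD()
raw, clipped, success = attack(fmodel, images, labels, epsilons=8 / 255)
print("attack success per example:", success)
```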
- Zhihu: How should we think about "adversarial examples" in machine vision, and what is the principle behind them?
- Adversarial examples
- A summary of the NIPS 2017 adversarial example attack and defense competition [Link]
- New methods for generating and defending against adversarial examples | talk summary [Link]
- Attack-track paper by the Tsinghua team [Boosting Adversarial Attacks with Momentum]
- Defense-track paper by the Tsinghua team [Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser]
- Momentum iterative attacks and high-level representation guided denoising: new methods for adversarial attack and defense [Video]
- An AI security expert shows how to attack cloud-hosted image classification models | practical guide [Link]
- IJCV 2022 | Inverting features drops a re-ID model from 88.54% to 0.15% [Link]