Adversarial Distributional Training for Robust Deep Learning. Zhijie Deng,...
CAT: Customized Adversarial Training for Improved Robustness. Minhao Cheng...
ClustTR: Clustering Training for Robustness. Motasem Alfarra, Juan C. Pére...
For newcomers to the field of adversarial examples, the sheer number of papers can be dizzying. At times like this, a good survey that summarizes the main progress in the field gives us the field's...
Title: DeepFool: a simple and accurate method to fool deep neural networks. URL: https://arxiv...
Title: Towards Evaluating the Robustness of Neural Networks. URL: https://arxiv....
Paper title: One pixel attack for fooling deep neural networks. Paper URL: https://arxiv...
Since Szegedy et al. first described adversarial examples in 2014, researchers have kept proposing new adversarial attack methods. This article compiles the vast majority of existing algorithms, offered as a starting point for discussion, and will be updated continually....
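To make the idea behind these attack methods concrete, here is a minimal sketch of a gradient-sign perturbation (in the spirit of FGSM) on a toy linear classifier. The weights, input, and epsilon below are illustrative assumptions, not taken from any paper in this list.

```python
import numpy as np

# Toy linear classifier: score = w . x, class 1 if score > 0.
# All values here are made up for illustration.
w = np.array([1.0, -2.0, 3.0])
x = np.array([0.5, 0.5, 0.5])   # clean input, classified as class 1

# For a linear model the gradient of the score w.r.t. x is just w.
# A gradient-sign attack perturbs x by eps times the sign of that
# gradient, here in the direction that lowers the score.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(w @ x)      # clean score (positive: class 1)
print(w @ x_adv)  # adversarial score (negative: prediction flips)
```

A tiny, visually similar perturbation (bounded by eps per coordinate) is enough to flip the prediction; the papers listed here differ mainly in how the perturbation direction and size are chosen.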
Paper title: The Limitations of Deep Learning in Adversarial Settings. Paper URL: https:...