A collection of links to adversarial-examples-related papers from ICML 2018–2020

I compiled this list by eye, so it may contain mistakes or omissions; please bear with me.

ICML 2020

[1909.13806] Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML

[2004.13617] Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks

[2003.03778] Adversarial Attacks on Probabilistic Autoregressive Forecasting Models

[2002.11569] Overfitting in adversarially robust deep learning

[2002.11798] Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization

[2002.04694] Adversarial Robustness for Code

[2008.02883] Stronger and Faster Wasserstein Adversarial Attacks

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

Adversarial Risk via Optimal Transport and Optimal Couplings

[2002.04599] Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations

[2006.14748] Proper Network Interpretability Helps Adversarial Robustness in Classification

[2006.16520] Black-box Certification and Learning under Adversarial Perturbations

[1909.04068] Adversarial Robustness Against the Union of Multiple Perturbation Models

[2007.11826] Hierarchical Verification for Adversarial Robustness

[2003.01690] Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

[2006.16384] Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification

[1906.07153] Adversarial attacks on Copyright Detection Systems

[2011.07478] Towards Understanding the Regularization of Adversarial Robustness on Neural Networks

[1810.06583] Concise Explanations of Neural Networks using Adversarial Training

[2002.11821] Improving Robustness of Deep-Learning-Based Image Reconstruction

[2002.04725] More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models

[2003.10602] Defense Through Diverse Directions

[2002.11565] Randomization matters. How to defend against strong adversarial attacks

[1907.02044] Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack

ICML 2019

First-Order Adversarial Vulnerability of Neural Networks and Input Dimension

[1809.01093] Adversarial Attacks on Node Embeddings via Graph Poisoning

[1907.13220] Multi-Agent Adversarial Inverse Reinforcement Learning

[1901.08846] Improving Adversarial Robustness via Promoting Ensemble Diversity

[1903.06603] On Certifying Non-uniform Bound against Adversarial Attacks

[1904.00759] Adversarial camera stickers: A physical camera-based attack on deep learning systems

[1805.10204] Adversarial examples from computational constraints

[1905.07387] POPQORN: Quantifying Robustness of Recurrent Neural Networks

[1810.04065] Generalized No Free Lunch Theorem for Adversarial Robustness

[1902.10660] Robust Decision Trees Against Adversarial Examples

[1802.06552] Are Generative Classifiers More Robust to Adversarial Attacks?

[1902.02918] Certified Adversarial Robustness via Randomized Smoothing

[1903.10346] Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition

[1905.06635] Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization

[1902.07906] Wasserstein Adversarial Examples via Projected Sinkhorn Iterations

[1905.05897] Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

[1905.00441] NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks

[1905.07121] Simple Black-box Adversarial Attacks

[1905.06494] Data Poisoning Attacks on Stochastic Bandits

Data Poisoning Attacks in Multi-Party Learning

Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers

[1901.10513] Adversarial Examples Are a Natural Consequence of Test Error in Noise

[1905.09797] Interpreting Adversarially Trained Convolutional Neural Networks

[1806.02977] Monge blunts Bayes: Hardness Results for Adversarial Training

[1811.00007] Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness

[1905.04172] On the Connection Between Adversarial Robustness and Saliency Map Interpretability

[1906.11897] On Physical Adversarial Patches for Object Detection

ICML 2018

Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope

Synthesizing Robust Adversarial Examples

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Selecting Representative Examples for Program Synthesis

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks

Black-box Adversarial Attacks with Limited Queries and Information

Analyzing the Robustness of Nearest Neighbors to Adversarial Examples