Links to adversarial-examples papers from NeurIPS 2016-2019

I compiled this list by eyeballing the proceedings, so there may be mistakes or omissions; please bear with me.

NeurIPS 2019

[1910.07629] A New Defense Against Adversarial Images: Turning a Weakness into a Strength

Adversarial Examples Are Not Bugs, They Are Features

[1906.04948] Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers

[1904.12843] Adversarial Training for Free!

[1910.14356] Certifiable Robustness to Graph Perturbations

[1809.03113] Certified Adversarial Robustness with Additive Noise

[1905.12202] Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness

[1907.10764] Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training

[1906.04392] Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks

[1811.10745] ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies

Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks

[1910.06513] ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

[1902.03538] Model Compression with Adversarial Robustness: A Unified Optimization Framework

[1905.00877] You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

[1905.12784] Intrinsic dimension of data representations in deep neural networks

[1902.02041] Fooling Neural Network Interpretations via Adversarial Model Manipulation

[1908.01517] Adversarial Self-Defense for Cycle-Consistent GANs

[1909.05822] On the Hardness of Robust Classification

[1905.13021] Robustness to Adversarial Perturbations in Learning from Incomplete Data

[1905.13736] Unlabeled Data Improves Adversarial Robustness

[1905.13472] Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness

[1909.00900] Metric Learning for Adversarial Robustness

Cross-Modal Learning with Adversarial Samples

[1904.13000] Adversarial Training and Robustness for Multiple Perturbations

[1911.00126] Adversarial Music: Real World Audio Adversary Against Wake-word Detection System

[1906.04584] Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers

[1905.09768] Zero-shot Knowledge Transfer via Adversarial Belief Matching

[1907.02610] Adversarial Robustness through Local Linearization

[1905.13725] Are Labels Required for Improving Adversarial Robustness?

[1906.11235] Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness

[1906.06919] Improving Black-box Adversarial Attacks with a Transfer-based Prior

[1906.00001] Functional Adversarial Attacks

[1905.11736] Cross-Domain Transferability of Adversarial Perturbations

Certifying Geometric Robustness of Neural Networks

[1906.03526] Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks

[1903.08778] Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes

[1910.05821] Policy Poisoning in Batch Reinforcement Learning and Control

[1904.06288] Outlier-robust estimation of a sparse linear model using $\ell_1$-penalized Huber's $M$-estimator

[1911.04681] On Robustness to Adversarial Examples and Polynomial Optimization

[1905.09027] Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder

[1906.03849] Robustness Verification of Tree-based Models

[1905.09957] Robust Attribution Regularization

[1909.12272] Lower Bounds on Adversarial Robustness from Optimal Transport

[1906.07916] Convergence of Adversarial Training in Overparametrized Neural Networks

[1902.01148] Theoretical evidence for adversarial robustness through randomization

[1909.07830] [Extended version] Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks

On Relating Explanations and Adversarial Examples

Image Synthesis with a Single (Robust) Classifier

NeurIPS 2018

Hessian-based Analysis of Large Batch Training and Robustness to Adversaries

Deep Defense: Training DNNs with Improved Adversarial Robustness

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans

With Friends Like These, Who Needs Adversaries?

Scaling provable adversarial defenses

Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform

A Spectral View of Adversarially Robust Features

Adversarially Robust Generalization Requires More Data

Towards Robust Detection of Adversarial Examples

Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples

Towards Robust Interpretability with Self-Explaining Neural Networks

Semidefinite relaxations for certifying robustness to adversarial examples

Adversarial Attacks on Stochastic Bandits

Adversarial vulnerability for any classifier

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution

Constructing Unrestricted Adversarial Examples with Generative Models

Robust Detection of Adversarial Attacks by Modeling the Intrinsic Properties of Deep Neural Networks

Sparse DNNs with Improved Adversarial Robustness

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

NeurIPS 2017

Certified Defenses for Data Poisoning Attacks

Lower bounds on the robustness to adversarial perturbations

Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples

[1712.09665] Adversarial Patch

NeurIPS 2016

Robustness of classifiers: from adversarial to random noise