My memorandum is nowhere to be found... mine...

Posts about image processing, programming, and the like.

Links to adversarial examples papers at NeurIPS 2016–2019

I sorted these by eye, so there may be mistakes or omissions; please bear with me.

2019

[1910.07629] A New Defense Against Adversarial Images: Turning a Weakness into a Strength

Adversarial Examples are not Bugs, they are Features

[1906.04948] Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers

[1904.12843] Adversarial Training for Free!

[1910.14356] Certifiable Robustness to Graph Perturbations

[1809.03113] Certified Adversarial Robustness with Additive Noise

[1905.12202] Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness

[1907.10764] Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training

[1906.04392] Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks

[1811.10745] ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies

Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks

[1910.06513] ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

[1902.03538] Model Compression with Adversarial Robustness: A Unified Optimization Framework

[1905.00877] You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

[1905.12784] Intrinsic dimension of data representations in deep neural networks

[1902.02041] Fooling Neural Network Interpretations via Adversarial Model Manipulation

[1908.01517] Adversarial Self-Defense for Cycle-Consistent GANs

[1909.05822] On the Hardness of Robust Classification

[1905.13021] Robustness to Adversarial Perturbations in Learning from Incomplete Data

[1905.13736] Unlabeled Data Improves Adversarial Robustness

[1905.13472] Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness

[1909.00900] Metric Learning for Adversarial Robustness

Cross-Modal Learning with Adversarial Samples

[1904.13000] Adversarial Training and Robustness for Multiple Perturbations

[1911.00126] Adversarial Music: Real World Audio Adversary Against Wake-word Detection System

[1906.04584] Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers

[1905.09768] Zero-shot Knowledge Transfer via Adversarial Belief Matching

[1907.02610] Adversarial Robustness through Local Linearization

[1905.13725] Are Labels Required for Improving Adversarial Robustness?

[1906.11235] Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness

[1906.06919] Improving Black-box Adversarial Attacks with a Transfer-based Prior

[1906.00001] Functional Adversarial Attacks

[1905.11736] Cross-Domain Transferability of Adversarial Perturbations

Certifying Geometric Robustness of Neural Networks

[1906.03526] Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks

[1903.08778] Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes

[1910.05821] Policy Poisoning in Batch Reinforcement Learning and Control

[1904.06288] Outlier-robust estimation of a sparse linear model using $\ell_1$-penalized Huber's $M$-estimator

[1911.04681] On Robustness to Adversarial Examples and Polynomial Optimization

[1905.09027] Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder

[1906.03849] Robustness Verification of Tree-based Models

[1905.09957] Robust Attribution Regularization

[1909.12272] Lower Bounds on Adversarial Robustness from Optimal Transport

[1906.07916] Convergence of Adversarial Training in Overparametrized Neural Networks

[1902.01148] Theoretical evidence for adversarial robustness through randomization

[1909.07830] [Extended version] Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks

On Relating Explanations and Adversarial Examples

Image Synthesis with a Single (Robust) Classifier

2018

Hessian-based Analysis of Large Batch Training and Robustness to Adversaries

Deep Defense: Training DNNs with Improved Adversarial Robustness

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans

With Friends Like These, Who Needs Adversaries?

Scaling provable adversarial defenses

Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform

A Spectral View of Adversarially Robust Features

Adversarially Robust Generalization Requires More Data

Towards Robust Detection of Adversarial Examples

Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples

Towards Robust Interpretability with Self-Explaining Neural Networks

Semidefinite relaxations for certifying robustness to adversarial examples

Adversarial Attacks on Stochastic Bandits

Adversarial vulnerability for any classifier

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution

Constructing Unrestricted Adversarial Examples with Generative Models

Robust Detection of Adversarial Attacks by Modeling the Intrinsic Properties of Deep Neural Networks

Sparse DNNs with Improved Adversarial Robustness

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

2017

Certified Defenses for Data Poisoning Attacks

Lower bounds on the robustness to adversarial perturbations

Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples

[1712.09665] Adversarial Patch

2016

Robustness of classifiers: from adversarial to random noise

Links to adversarial examples papers at ECCV 2018 and 2020

I sorted these by eye, so there may be mistakes or omissions; please bear with me.

2020

Model-Agnostic Boundary-Adversarial Sampling for Test-Time Generalization in Few-Shot learning

Regularization with Latent Space Virtual Adversarial Training

Targeted Attack for Deep Hashing based Retrieval

Multitask Learning Strengthens Adversarial Robustness

Towards Automated Testing and Robustification by Semantic Adversarial Data Generation

Improved Adversarial Training via Learned Optimizer

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

Indirect Local Attacks for Context-aware Semantic Segmentation Networks

AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds

Adversarial T-shirt! Evading Person Detectors in A Physical World

Bias-based Universal Adversarial Patch Attack for Automatic Check-out

SemanticAdv: Generating Adversarial Examples via Attribute-conditioned Image Editing

Adversarial Ranking Attack and Defense

Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation

Yet Another Intermediate-Level Attack

Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip

Spatiotemporal Attacks for Embodied Agents

Open-set Adversarial Defense

Semantic Equivalent Adversarial Data Augmentation for Visual Question Answering

Robust Tracking against Adversarial Attacks

Sparse Adversarial Attack via Perturbation Factorization

Square Attack: a query-efficient black-box adversarial attack via random search

Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting

Improving Query Efficiency of Black-box Adversarial Attack

What makes fake images detectable? Understanding properties that generalize

Efficient Adversarial Attacks for Visual Object Tracking

PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning

Practical Poisoning Attacks on Neural Networks

Improving Adversarial Robustness by Enforcing Local and Global Compactness

SPARK: Spatial-aware Online Incremental Attack Against Visual Tracking

Patch-wise Attack for Fooling Deep Neural Network

Defense Against Adversarial Attacks via Controlling Gradient Leaking on Embedded Manifolds

Manifold Projection for Adversarial Defense on Face Recognition

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations

New Threats against Object Detector with Non-local Block

2018

Practical Black-box Attacks on Deep Neural Networks using Efficient Query Mechanisms

Ask, Acquire and Attack: Data-free UAP generation using Class impressions

Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization

Learning Discriminative Video Representations Using Adversarial Perturbations

Transferable Adversarial Perturbations

Links to adversarial examples papers at ICCV 2017 and 2019

I sorted these by eye, so there may be mistakes or omissions; please bear with me.

2019

Adversarial Robustness vs. Model Compression, or Both?

On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method

What Else Can Fool Deep Learning? Addressing Color Constancy Errors on Deep Neural Network Performance

Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks

Towards Adversarially Robust Object Detection

Generative Adversarial Minority Oversampling

DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense

Fooling Network Interpretation in Image Classification

SpatialSense: An Adversarially Crowdsourced Benchmark for Spatial Relation Recognition

Adversarial Defense via Learning to Generate Diverse Attacks

Universal Adversarial Perturbation via Prior Driven Uncertainty Approximation

Understanding Deep Networks via Extremal Perturbations and Smooth Masks

Adversarial Feedback Loop

Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks

Adversarial Fine-Grained Composition Learning for Unseen Attribute-Object Recognition

AdvIT: Adversarial Frames Identifier Based on Temporal Consistency in Videos

Why Does a Visual Question Have Different Answers?

Sparse and Imperceivable Adversarial Attacks

Enhancing Adversarial Example Transferability With an Intermediate Level Attack

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers

Hilbert-Based Generative Defense for Adversarial Examples

Physical Adversarial Textures That Fool Visual Object Tracking

The LogBarrier Adversarial Attack: Making Effective Use of Decision Boundary Information

Improving Adversarial Robustness via Guided Complement Entropy

Universal Perturbation Attack Against Image Retrieval

Defending Against Universal Perturbations With Shared Adversarial Training

Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks

Targeted Mismatch Adversarial Attack: Query With a Flower to Retrieve the Tower

Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks

CIIDefence: Defeating Adversarial Attacks by Fusing Class-Specific Image Inpainting and Image Denoising

FDA: Feature Disruptive Attack

advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns

2017

SafetyNet: Detecting and Rejecting Adversarial Examples Robustly

Adversarial Examples for Semantic Segmentation and Object Detection

Adversarial Image Perturbation for Privacy Protection -- A Game Theory Perspective

Interpretable Explanations of Black Boxes by Meaningful Perturbation

Universal Adversarial Perturbations Against Semantic Image Segmentation

Guided Perturbations: Self-Corrective Behavior in Convolutional Neural Networks

Adversarial Examples Detection in Deep Networks With Convolutional Filter Statistics

Adversarial examples paper link collections

CVPR

kamakuraviel.hatenablog.com

ICLR

kamakuraviel.hatenablog.com

ICML

kamakuraviel.hatenablog.com

NeurIPS

kamakuraviel.hatenablog.com

ICCV

kamakuraviel.hatenablog.com

ECCV

kamakuraviel.hatenablog.com

AAAI

kamakuraviel.hatenablog.com

IEEE Symposium on Security & Privacy

kamakuraviel.hatenablog.com

Links to adversarial examples papers at ICLR 2014–2021

I sorted these by eye, so there may be mistakes or omissions; please bear with me.

2021

Geometry-aware Instance-reweighted Adversarial Training | OpenReview

Improving Adversarial Robustness via Channel-wise Activation Suppressing | OpenReview

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples | OpenReview

How Benign is Benign Overfitting ? | OpenReview

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference | OpenReview

Stabilized Medical Image Attacks | OpenReview

Learning perturbation sets for robust machine learning | OpenReview

Removing Undesirable Feature Contributions Using Out-of-Distribution Data | OpenReview

Robust Reinforcement Learning on State Observations with Learned Optimal Adversary | OpenReview

Efficient Certified Defenses Against Patch Attacks on Image Classifiers | OpenReview

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits | OpenReview

DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation | OpenReview

Understanding the failure modes of out-of-distribution generalization | OpenReview

Effective and Efficient Vote Attack on Capsule Networks | OpenReview

Contrastive Learning with Adversarial Perturbations for Conditional Text Generation | OpenReview

Bag of Tricks for Adversarial Training | OpenReview

Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds | OpenReview

Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples | OpenReview

Perceptual Adversarial Robustness: Defense Against Unseen Threat Models | OpenReview

Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks | OpenReview

On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning | OpenReview

Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification | OpenReview

Online Adversarial Purification based on Self-supervised Learning | OpenReview

Self-supervised Adversarial Robustness for the Low-label, High-data Regime | OpenReview

Robust Overfitting may be mitigated by properly learned smoothening | OpenReview

WaNet - Imperceptible Warping-based Backdoor Attack | OpenReview

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition | OpenReview

ARMOURED: Adversarially Robust MOdels using Unlabeled data by REgularizing Diversity | OpenReview

Fooling a Complete Neural Network Verifier | OpenReview

2020

ICLR: EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks

ICLR: Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks

ICLR: Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

ICLR: Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

ICLR: FreeLB: Enhanced Adversarial Training for Natural Language Understanding

ICLR: Empirical Studies on the Properties of Linear Regions in Deep Neural Networks

ICLR: Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions

ICLR: Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness

ICLR: Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$

ICLR: Jacobian Adversarially Regularized Networks for Robustness

ICLR: Implicit Bias of Gradient Descent based Adversarial Training on Separable Data

ICLR: Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier

ICLR: MMA Training: Direct Input Space Margin Maximization through Adversarial Training

ICLR: Adversarial Policies: Attacking Deep Reinforcement Learning

ICLR: MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

ICLR: Enhancing Adversarial Defense by k-Winners-Take-All

ICLR: Intriguing Properties of Adversarial Training at Scale

ICLR: Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks

ICLR: Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

ICLR: Certified Defenses for Adversarial Patches

ICLR: Fast is better than free: Revisiting adversarial training

ICLR: Towards neural networks that provably know when they don't know

ICLR: GAT: Generative Adversarial Training for Adversarial Example Detection and Classification

ICLR: Improving Adversarial Robustness Requires Revisiting Misclassified Examples

ICLR: BayesOpt Adversarial Attack

ICLR: Robust anomaly detection and backdoor attack detection via differential privacy

ICLR: Adversarially Robust Representations with Smooth Encoders

ICLR: Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks

ICLR: Unrestricted Adversarial Examples via Semantic Manipulation

ICLR: Adversarial AutoAugment

ICLR: Defending Against Physically Realizable Attacks on Image Classification

ICLR: Black-Box Adversarial Attack with Transferable Model-based Embedding

ICLR: Robust Local Features for Improving the Generalization of Adversarial Training

ICLR: Breaking Certified Defenses: Semantic Adversarial Examples with Spoofed Robustness Certificates

ICLR: Optimal Strategies Against Generative Attacks

ICLR: Adversarially robust transfer learning

ICLR: Sign Bits Are All You Need for Black-Box Attacks

ICLR: Federated Adversarial Domain Adaptation

ICLR: Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

ICLR: Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks

ICLR: Transferable Perturbations of Deep Feature Distributions

ICLR: Adversarial Training and Provable Defenses: Bridging the Gap

ICLR: Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

2019

CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild | OpenReview

Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer | OpenReview

PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks | OpenReview

Boosting Robustness Certification of Neural Networks | OpenReview

ADef: an Iterative Algorithm to Construct Adversarial Deformations | OpenReview

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations | OpenReview

On the Sensitivity of Adversarial Robustness to Input Data Distributions | OpenReview

Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer | OpenReview

Robustness May Be at Odds with Accuracy | OpenReview

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network | OpenReview

Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability | OpenReview

Don't let your Discriminator be fooled | OpenReview

Cost-Sensitive Robustness against Adversarial Examples | OpenReview

Are adversarial examples inevitable? | OpenReview

Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors | OpenReview

Adversarial Attacks on Graph Neural Networks via Meta Learning | OpenReview

Structured Adversarial Attack: Towards General Implementation and Better Interpretability | OpenReview

Adversarial Reprogramming of Neural Networks | OpenReview

Excessive Invariance Causes Adversarial Vulnerability | OpenReview

SPIGAN: Privileged Adversarial Learning from Simulation | OpenReview

Towards the first adversarially robust neural network model on MNIST | OpenReview

Improving the Generalization of Adversarial Training with Domain Adaptation | OpenReview

Generalizable Adversarial Training via Spectral Normalization | OpenReview

Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures | OpenReview

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness | OpenReview

The Limitations of Adversarial Training and the Blind-Spot Attack | OpenReview

Combinatorial Attacks on Binarized Neural Networks | OpenReview

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach | OpenReview

Characterizing Audio Adversarial Examples Using Temporal Dependency | OpenReview

2018

Spatially Transformed Adversarial Examples | OpenReview

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models | OpenReview

Towards Deep Learning Models Resistant to Adversarial Attacks | OpenReview

Decision Boundary Analysis of Adversarial Examples | OpenReview

Stochastic Activation Pruning for Robust Adversarial Defense | OpenReview

Cascade Adversarial Machine Learning Regularized with a Unified Embedding | OpenReview

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality | OpenReview

Thermometer Encoding: One Hot Way To Resist Adversarial Examples | OpenReview

Certified Defenses against Adversarial Examples | OpenReview

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models | OpenReview

Adversarial Dropout Regularization | OpenReview

Generating Natural Adversarial Examples | OpenReview

Countering Adversarial Images using Input Transformations | OpenReview

Ensemble Adversarial Training: Attacks and Defenses | OpenReview

Certifying Some Distributional Robustness with Principled Adversarial Training | OpenReview

PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples | OpenReview

Combating Adversarial Attacks Using Sparse Representations | OpenReview

On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples | OpenReview

Attacking the Madry Defense Model with $L_1$-based Adversarial Examples | OpenReview

Black-box Attacks on Deep Neural Networks via Gradient Estimation | OpenReview

Intriguing Properties of Adversarial Examples | OpenReview

Adversarial Policy Gradient for Alternating Markov Games | OpenReview

2017

[1611.01236] Adversarial Machine Learning at Scale

[1702.04267] On Detecting Adversarial Perturbations

[1611.02770] Delving into Transferable Adversarial Examples and Black-box Attacks

[1607.02533] Adversarial examples in the physical world

2016

[1511.05122] Adversarial Manipulation of Deep Representations

2015

[1412.6572] Explaining and Harnessing Adversarial Examples

[1412.5068] Towards Deep Neural Network Architectures Robust to Adversarial Examples

2014

[1312.6199] Intriguing properties of neural networks

Handling Chrome bookmarks with Python

Environment

Purpose

Write out each bookmark's title and URL in Markdown format.

Using the JSON file saved on the PC

import json

# Chrome stores its bookmarks as a JSON file inside the user profile
# (Windows path shown; replace {username} with your account name).
path = 'C:\\Users\\{username}\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Bookmarks'

with open(path, encoding='utf-8') as f:
    js_dict = json.load(f)

# Top-level entries of the bookmark bar; only plain URL entries are
# printed here, so bookmarks nested inside folders are skipped.
bookmark_list = js_dict['roots']['bookmark_bar']['children']
for bookmark in bookmark_list:
    if bookmark['type'] == 'url':
        bookmark_name = bookmark['name']
        url = bookmark['url']
        print(f'[{bookmark_name}]({url})')
        print()  # blank line between Markdown links
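
The loop above only visits the top level of the bookmark bar, so anything nested inside folders is skipped. Below is a minimal recursive sketch, assuming the same Bookmarks JSON structure (folder nodes have 'type' == 'folder' and a 'children' list; print_bookmarks is a helper name introduced here, not part of any API):

import json

path = 'C:\\Users\\{username}\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Bookmarks'

def print_bookmarks(node):
    # URL nodes become one Markdown link line each.
    if node['type'] == 'url':
        print(f"[{node['name']}]({node['url']})")
        print()
    # Folder nodes (including the bookmark bar itself) are walked recursively.
    elif node['type'] == 'folder':
        for child in node.get('children', []):
            print_bookmarks(child)

with open(path, encoding='utf-8') as f:
    js_dict = json.load(f)

print_bookmarks(js_dict['roots']['bookmark_bar'])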

Using an exported HTML file

Install Beautiful Soup first:

pip3 install beautifulsoup4

from bs4 import BeautifulSoup

# Path to the HTML file exported from Chrome's bookmark manager.
path = '/Users/kumano/Documents/bookmarks_2022_06_11.html'

with open(path, encoding='utf-8') as f:
    contents = f.read()

# Every bookmark in the exported HTML is an <a> tag.
soup = BeautifulSoup(contents, features='html.parser')
link_list = soup.find_all('a')
for link in link_list:
    url = link.get('href')
    title = link.string
    print(f'[{title}]({url})')
    print()
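
If you would rather save the Markdown to a file than print it, the same extraction can collect the lines and write them out in one go; a small sketch (out_path is a hypothetical destination):

from bs4 import BeautifulSoup

in_path = '/Users/kumano/Documents/bookmarks_2022_06_11.html'
out_path = 'bookmarks.md'  # hypothetical output file

with open(in_path, encoding='utf-8') as f:
    soup = BeautifulSoup(f.read(), features='html.parser')

# One '[title](url)' line per <a> tag, separated by blank lines.
lines = [f"[{a.string}]({a.get('href')})" for a in soup.find_all('a')]

with open(out_path, 'w', encoding='utf-8') as f:
    f.write('\n\n'.join(lines) + '\n')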

Result

[Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem](https://openaccess.thecvf.com/content_CVPR_2019/papers/Hein_Why_ReLU_Networks_Yield_High-Confidence_Predictions_Far_Away_From_the_CVPR_2019_paper.pdf)

[Feature Denoising for Improving Adversarial Robustness](https://openaccess.thecvf.com/content_CVPR_2019/papers/Xie_Feature_Denoising_for_Improving_Adversarial_Robustness_CVPR_2019_paper.pdf)

Links to adversarial examples papers at CVPR 2015–2022

I sorted these by eye, so there may be mistakes or omissions; please bear with me.

2022

Improving Adversarially Robust Few-shot Image Classification with Generalizable Representations

Bounded Adversarial Attack on Deep Content Features

[2203.01439] Enhancing Adversarial Robustness for Deep Metric Learning

[2112.05379] Cross-Modal Transferable Adversarial Attacks from Images to Videos

[2203.05151] Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity

[2009.00097] Adversarial Eigen Attack on Black-Box Models

[1910.00982] Adversarially Robust Few-Shot Learning: A Meta-Learning Approach

[2203.05154] Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack

[1809.02918] Towards Query Efficient Black-box Attacks: An Input-free Perspective

[2203.03373] Adversarial Texture for Fooling Person Detectors in the Physical World

[2204.02738] Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network

[2201.05057] On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles

[2009.09258] Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection

[2203.06020] Enhancing Adversarial Training with Second-Order Statistics of Weights

[2111.12229] Subspace Adversarial Training

[2203.04041] Shape-invariant 3D Adversarial Point Clouds

[2203.12208] Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection

[2111.15121] Pyramid Adversarial Training Improves ViT Performance

[2105.10123] Backdoor Attacks on Self-Supervised Learning

[2203.03818] Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon

[2103.16255] Towards Understanding Adversarial Robustness of Optical Flow Networks

[2203.09566] Leveraging Adversarial Examples to Quantify Membership Information Leakage

[2111.12965] Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks

[2203.06616] LAS-AT: Adversarial Training with Learnable Attack Strategy

[2203.01584] Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models

[2205.03546] Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees

[2006.08538] Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution

[2203.09123] Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

[2203.15674] Exploring Frequency Adversarial Attacks for Face Forgery Detection

[2011.06922] Image Animation with Perturbed Masks

[2105.14727] Transferable Sparse Adversarial Attack

[2112.04532] Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection

[2012.12368] Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training

[2205.13383] BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning

[2203.13639] Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness

[2203.01925] Label-Only Model Inversion Attacks via Boundary Repulsion

2021

Over-the-Air Adversarial Flickering Attacks Against Video Recognition Networks

Robust and Accurate Object Detection via Adversarial Learning

Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect

Natural Adversarial Examples

VideoMoCo: Contrastive Video Representation Learning With Temporally Adversarial Examples

Delving into Data: Effectively Substitute Training for Black-box Attack

SurFree: A Fast Surrogate-Free Black-Box Attack

LiBRe: A Practical Bayesian Approach to Adversarial Detection

QAIR: Practical Query-Efficient Black-Box Attacks for Image Retrieval

Prototype-Supervised Adversarial Network for Targeted Attack of Deep Hashing

Regularizing Neural Networks via Adversarial Model Perturbation

Exploring Adversarial Fake Images on Face Manifold

IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking

Improving Transferability of Adversarial Patches on Face Recognition With Generative Models

Architectural Adversarial Robustness: The Case for Deep Pursuit

Adversarial Robustness Under Long-Tailed Distribution

Adversarial Imaging Pipelines

Achieving Robustness in Classification Using Optimal Transport With Hinge Regularization

Universal Spectral Adversarial Attacks for Deformable Shapes

Simulating Unknown Target Models for Query-Efficient Black-Box Attacks

Class-Aware Robust Adversarial Training for Object Detection

Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World

Improving the Transferability of Adversarial Samples With Adversarial Transformations

LAFEAT: Piercing Through Adversarial Defenses With Latent Features

How Robust Are Randomized Smoothing Based Defenses to Data Poisoning?

Understanding the Robustness of Skeleton-Based Action Recognition Under Adversarial Attack

Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink

MaxUp: Lightweight Adversarial Training With Data Augmentation Improves Neural Network Training

Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation

Explaining Classifiers Using Adversarial Perturbations on the Perceptual Ball

You See What I Want You To See: Exploring Targeted Black-Box Transferability Attack for Hash-Based Image Retrieval Systems

Can Audio-Visual Integration Strengthen Robustness Under Multimodal Attacks?

Adversarial Robustness Across Representation Spaces

2020

Towards Transferable Targeted Attack

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

ColorFool: Semantic Adversarial Colorization

Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations

Boosting the Transferability of Adversarial Samples via Attention

Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness

On Isometry Robustness of Deep 3D Point Cloud Models Under Adversarial Attacks

Adversarial Examples Improve Image Recognition

Efficient Adversarial Training With Transferable Adversarial Examples

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

Adversarial Texture Optimization From RGB-D Scans

Modeling Biological Immunity to Adversarial Examples

Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes

Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction

Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

Benchmarking Adversarial Robustness on Image Classification

What It Thinks Is Important Is Important: Robustness Transfers Through Input Gradients

How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework

A Self-supervised Approach for Adversarial Robustness

Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

Projection & Probability-Driven Black-Box Attack

When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks

QEBA: Query-Efficient Boundary-Based Blackbox Attack

Universal Physical Camouflage Attacks on Object Detectors

Defending Against Universal Attacks Through Selective Feature Regeneration

Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder

Robust Design of Deep Neural Networks Against Adversarial Attacks Based on Lyapunov Theory

Progressive Adversarial Networks for Fine-Grained Domain Adaptation

AdversarialNAS: Adversarial Neural Architecture Search for GANs

Cross-Domain Face Presentation Attack Detection via Multi-Domain Disentangled Representation Learning

EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning

Attack to Explain Deep Representation

Towards Universal Representation Learning for Deep Face Recognition

Non-Adversarial Video Synthesis With Learned Priors

What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images

Physically Realizable Adversarial Examples for LiDAR Object Detection

LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud Based Deep Networks

Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack

Adversarial Feature Hallucination Networks for Few-Shot Learning

Robust Superpixel-Guided Attentional Adversarial Attack

ILFO: Adversarial Attack on Adaptive Neural Networks

Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors

Smoothing Adversarial Domain Attack and P-Memory Reconsolidation for Cross-Domain Person Re-Identification

M2m: Imbalanced Classification via Major-to-minor Translation

2019

Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem

Feature Denoising for Improving Adversarial Robustness

Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack

Taking a Closer Look at Domain Shift: Category-Level Adversaries for Semantics Consistent Domain Adaptation

Improving Transferability of Adversarial Examples With Input Diversity

Exact Adversarial Attack to Image Captioning via Structured Output Learning With Latent Variables

Adversarial Attacks Beyond the Image Space

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses

What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?

Handwriting Recognition in Low-Resource Scripts Using Adversarial Learning

Adversarial Defense Through Network Profiling Based Path Extraction

Detection Based Defense Against Adversarial Examples From the Steganalysis Point of View

Curls & Whey: Boosting Black-Box Adversarial Attacks

Barrage of Random Transforms for Adversarially Robust Defense

Adversarial Inference for Multi-Sentence Video Description

MeshAdv: Adversarial Meshes for Visual Recognition

Disentangling Adversarial Robustness and Generalization

ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness

Feature Space Perturbations Yield More Transferable Adversarial Examples

Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search

SparseFool: A Few Pixels Make a Big Difference

Generating 3D Adversarial Point Clouds

Catastrophic Child's Play: Easy to Perform, Hard to Defend Adversarial Attacks

Defending Against Adversarial Attacks by Randomized Diversification

Rob-GAN: Generator, Discriminator, and Adversarial Attacker

Trust Region Based Adversarial Attack on Neural Networks

Adversarial Defense by Stratified Convolutional Sparse Coding

Retrieval-Augmented Convolutional Neural Networks Against Adversarial Examples

Robustness of 3D Deep Learning in an Adversarial Setting

[1904.08653] Fooling automated surveillance cameras: adversarial patches to attack person detection

2018

On the Robustness of Semantic Segmentation Models to Adversarial Attacks

NAG: Network for Adversary Generation

Robust Physical-World Attacks on Deep Learning Visual Classification

Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser

Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation

Adversarially Learned One-Class Classifier for Novelty Detection

Defense Against Universal Adversarial Perturbations

Generative Adversarial Perturbations

Adversarially Occluded Samples for Person Re-Identification

Boosting Adversarial Attacks With Momentum

2017

A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection

Universal Adversarial Perturbations

2016

DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks

2015

Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images