
A collection of links to adversarial-examples papers from CVPR15~22

I picked these out by eye, so there may be mistakes or omissions; please bear with me.

CVPR 2022

Improving Adversarially Robust Few-shot Image Classification with Generalizable Representations

Bounded Adversarial Attack on Deep Content Features

[2203.01439] Enhancing Adversarial Robustness for Deep Metric Learning

[2112.05379] Cross-Modal Transferable Adversarial Attacks from Images to Videos

[2203.05151] Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity

[2009.00097] Adversarial Eigen Attack on Black-Box Models

[1910.00982] Adversarially Robust Few-Shot Learning: A Meta-Learning Approach

[2203.05154] Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack

[1809.02918] Towards Query Efficient Black-box Attacks: An Input-free Perspective

[2203.03373] Adversarial Texture for Fooling Person Detectors in the Physical World

[2204.02738] Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network

[2201.05057] On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles

[2009.09258] Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection

[2203.06020] Enhancing Adversarial Training with Second-Order Statistics of Weights

[2111.12229] Subspace Adversarial Training

[2203.04041] Shape-invariant 3D Adversarial Point Clouds

[2203.12208] Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection

[2111.15121] Pyramid Adversarial Training Improves ViT Performance

[2105.10123] Backdoor Attacks on Self-Supervised Learning

[2203.03818] Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon

[2103.16255] Towards Understanding Adversarial Robustness of Optical Flow Networks

[2203.09566] Leveraging Adversarial Examples to Quantify Membership Information Leakage

[2111.12965] Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks

[2203.06616] LAS-AT: Adversarial Training with Learnable Attack Strategy

[2203.01584] Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models

[2205.03546] Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees

[2006.08538] Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution

[2203.09123] Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

[2203.15674] Exploring Frequency Adversarial Attacks for Face Forgery Detection

[2011.06922] Image Animation with Perturbed Masks

[2105.14727] Transferable Sparse Adversarial Attack

[2112.04532] Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection

[2012.12368] Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training

[2205.13383] BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning

[2203.13639] Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness

[2203.01925] Label-Only Model Inversion Attacks via Boundary Repulsion

CVPR 2021

Over-the-Air Adversarial Flickering Attacks Against Video Recognition Networks

Robust and Accurate Object Detection via Adversarial Learning

Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect

Natural Adversarial Examples

VideoMoCo: Contrastive Video Representation Learning With Temporally Adversarial Examples

Delving into Data: Effectively Substitute Training for Black-box Attack

SurFree: A Fast Surrogate-Free Black-Box Attack

LiBRe: A Practical Bayesian Approach to Adversarial Detection

QAIR: Practical Query-Efficient Black-Box Attacks for Image Retrieval

Prototype-Supervised Adversarial Network for Targeted Attack of Deep Hashing

Regularizing Neural Networks via Adversarial Model Perturbation

Exploring Adversarial Fake Images on Face Manifold

IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking

Improving Transferability of Adversarial Patches on Face Recognition With Generative Models

Architectural Adversarial Robustness: The Case for Deep Pursuit

Adversarial Robustness Under Long-Tailed Distribution

Adversarial Imaging Pipelines

Achieving Robustness in Classification Using Optimal Transport With Hinge Regularization

Universal Spectral Adversarial Attacks for Deformable Shapes

Simulating Unknown Target Models for Query-Efficient Black-Box Attacks

Class-Aware Robust Adversarial Training for Object Detection

Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World

Improving the Transferability of Adversarial Samples With Adversarial Transformations

LAFEAT: Piercing Through Adversarial Defenses With Latent Features

How Robust Are Randomized Smoothing Based Defenses to Data Poisoning?

Understanding the Robustness of Skeleton-Based Action Recognition Under Adversarial Attack

Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink

MaxUp: Lightweight Adversarial Training With Data Augmentation Improves Neural Network Training

Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation

Explaining Classifiers Using Adversarial Perturbations on the Perceptual Ball

You See What I Want You To See: Exploring Targeted Black-Box Transferability Attack for Hash-Based Image Retrieval Systems

Can Audio-Visual Integration Strengthen Robustness Under Multimodal Attacks?

Adversarial Robustness Across Representation Spaces

CVPR 2020

Towards Transferable Targeted Attack

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

ColorFool: Semantic Adversarial Colorization

Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations

Boosting the Transferability of Adversarial Samples via Attention

Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness

On Isometry Robustness of Deep 3D Point Cloud Models Under Adversarial Attacks

Adversarial Examples Improve Image Recognition

Efficient Adversarial Training With Transferable Adversarial Examples

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

Adversarial Texture Optimization From RGB-D Scans

Modeling Biological Immunity to Adversarial Examples

Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes

Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction

Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

Benchmarking Adversarial Robustness on Image Classification

What It Thinks Is Important Is Important: Robustness Transfers Through Input Gradients

How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework

A Self-supervised Approach for Adversarial Robustness

Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

Projection & Probability-Driven Black-Box Attack

When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks

QEBA: Query-Efficient Boundary-Based Blackbox Attack

Universal Physical Camouflage Attacks on Object Detectors

Defending Against Universal Attacks Through Selective Feature Regeneration

Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder

Robust Design of Deep Neural Networks Against Adversarial Attacks Based on Lyapunov Theory

Progressive Adversarial Networks for Fine-Grained Domain Adaptation

AdversarialNAS: Adversarial Neural Architecture Search for GANs

Cross-Domain Face Presentation Attack Detection via Multi-Domain Disentangled Representation Learning

EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning

Attack to Explain Deep Representation

Towards Universal Representation Learning for Deep Face Recognition

Non-Adversarial Video Synthesis With Learned Priors

What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images

Physically Realizable Adversarial Examples for LiDAR Object Detection

LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud Based Deep Networks

Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack

Adversarial Feature Hallucination Networks for Few-Shot Learning

Robust Superpixel-Guided Attentional Adversarial Attack

ILFO: Adversarial Attack on Adaptive Neural Networks

Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors

Smoothing Adversarial Domain Attack and P-Memory Reconsolidation for Cross-Domain Person Re-Identification

M2m: Imbalanced Classification via Major-to-minor Translation

CVPR 2019

Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem

Feature Denoising for Improving Adversarial Robustness

Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack

Taking a Closer Look at Domain Shift: Category-Level Adversaries for Semantics Consistent Domain Adaptation

Improving Transferability of Adversarial Examples With Input Diversity

Exact Adversarial Attack to Image Captioning via Structured Output Learning With Latent Variables

Adversarial Attacks Beyond the Image Space

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses

What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?

Handwriting Recognition in Low-Resource Scripts Using Adversarial Learning

Adversarial Defense Through Network Profiling Based Path Extraction

Detection Based Defense Against Adversarial Examples From the Steganalysis Point of View

Curls & Whey: Boosting Black-Box Adversarial Attacks

Barrage of Random Transforms for Adversarially Robust Defense

Adversarial Inference for Multi-Sentence Video Description

MeshAdv: Adversarial Meshes for Visual Recognition

Disentangling Adversarial Robustness and Generalization

ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness

Feature Space Perturbations Yield More Transferable Adversarial Examples

Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search

SparseFool: A Few Pixels Make a Big Difference

Generating 3D Adversarial Point Clouds

Catastrophic Child's Play: Easy to Perform, Hard to Defend Adversarial Attacks

Defending Against Adversarial Attacks by Randomized Diversification

Rob-GAN: Generator, Discriminator, and Adversarial Attacker

Trust Region Based Adversarial Attack on Neural Networks

Adversarial Defense by Stratified Convolutional Sparse Coding

Retrieval-Augmented Convolutional Neural Networks Against Adversarial Examples

Robustness of 3D Deep Learning in an Adversarial Setting

[1904.08653] Fooling automated surveillance cameras: adversarial patches to attack person detection

CVPR 2018

On the Robustness of Semantic Segmentation Models to Adversarial Attacks

NAG: Network for Adversary Generation

Robust Physical-World Attacks on Deep Learning Visual Classification

Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser

Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation

Adversarially Learned One-Class Classifier for Novelty Detection

Defense Against Universal Adversarial Perturbations

Generative Adversarial Perturbations

Adversarially Occluded Samples for Person Re-Identification

Boosting Adversarial Attacks With Momentum
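The momentum iterative attack (MI-FGSM) from this last paper is the baseline that many of the transferability papers above build on, so here is a minimal PyTorch sketch of the idea. This is my own illustration, not the authors' code; `model` (a logits-producing network), images `x` in [0, 1], and labels `y` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Minimal sketch of the momentum iterative attack (MI-FGSM).

    Accumulates the (L1-normalized) gradient into a momentum buffer,
    takes a sign step, and projects back into the L-inf ball of radius eps.
    """
    alpha = eps / steps              # per-step size
    g = torch.zeros_like(x)          # momentum buffer
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # normalize per image (proportional to L1), then accumulate with decay mu
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # project into the eps-ball around x and the valid pixel range
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

With `mu=0` this degenerates to plain iterative FGSM; the momentum term is what stabilizes the update direction and improves transfer to other models.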

CVPR 2017

A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection

Universal Adversarial Perturbations

CVPR 2016

DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
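DeepFool is compact enough to sketch. Below is a minimal PyTorch version of its linearize-and-step loop for a single image; `model` (returning logits for a batch of one) and the hyperparameters are assumed placeholders, not the authors' reference implementation.

```python
import torch

def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """Minimal sketch of DeepFool for a single image x of shape [1, C, H, W].

    Each step linearizes the classifier around the current point and moves
    toward the nearest linearized decision boundary among the candidate classes.
    """
    x_adv = x.clone().detach()
    orig_label = model(x).argmax(dim=1).item()
    r_total = torch.zeros_like(x)

    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != orig_label:
            break  # a decision boundary has been crossed
        grad_orig = torch.autograd.grad(logits[orig_label], x_adv, retain_graph=True)[0]

        best_dist, best_step = None, None
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
            w = grad_k - grad_orig                      # boundary normal (linearized)
            f = (logits[k] - logits[orig_label]).item() # signed distance numerator
            w_norm = w.norm().item()
            dist = abs(f) / (w_norm + 1e-8)
            if best_dist is None or dist < best_dist:
                best_dist = dist
                best_step = (abs(f) + 1e-4) / (w_norm ** 2 + 1e-8) * w
        r_total = r_total + best_step
        # small overshoot pushes the point just past the boundary
        x_adv = (x + (1 + overshoot) * r_total).detach()
    return x_adv.detach()
```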

CVPR 2015

Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images
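This paper produces images that are unrecognizable to humans yet classified with high confidence, using either evolutionary search or gradient ascent. A minimal sketch of the gradient-ascent variant, under the assumption of a pretrained `model` returning logits (the input `shape` is a placeholder):

```python
import torch

def fooling_image(model, target_class, shape=(1, 3, 224, 224), steps=200, lr=0.1):
    """Start from random noise and directly maximize the logit of
    target_class, yielding an unrecognizable but confidently
    classified image (gradient-ascent variant of Nguyen et al.)."""
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x.clamp(0, 1))[0, target_class]  # ascend the target logit
        loss.backward()
        opt.step()
    return x.detach().clamp(0, 1)
```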