
dc.contributor.author        Gupta, Aparna
dc.contributor.author        Banburski, Andrzej
dc.contributor.author        Poggio, Tomaso
dc.date.accessioned          2022-03-30T18:19:38Z
dc.date.available            2022-03-30T18:19:38Z
dc.date.issued               2022-03-30
dc.identifier.uri            https://hdl.handle.net/1721.1/141424
dc.description.abstract      Neural network classifiers are known to be highly vulnerable to adversarial perturbations in their inputs. Under the hypothesis that adversarial examples lie outside of the sub-manifold of natural images, previous work has investigated the impact of principal components in data on adversarial robustness. In this paper we show that there exists a very simple defense mechanism in the case where adversarial images are separable in a previously defined $(k,p)$ metric. This defense is very successful against the popular Carlini-Wagner attack, but less so against some other common attacks like FGSM. It is interesting to note that the defense is still successful for relatively large perturbations.   en_US
dc.description.sponsorship   This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.   en_US
dc.publisher                 Center for Brains, Minds and Machines (CBMM)   en_US
dc.relation.ispartofseries   CBMM Memo;135
dc.title                     PCA as a defense against some adversaries   en_US
dc.type                      Article   en_US
dc.type                      Technical Report   en_US
dc.type                      Working Paper   en_US
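
The abstract describes a PCA-based defense built on a previously defined $(k,p)$ metric, but the metric itself is not reproduced in this record. The snippet below is therefore only a minimal sketch of the generic idea of projecting inputs onto the leading principal components of the training data before classification; the function and parameter names (fit_pca_defense, defend, k_components) are illustrative and not taken from the memo.

    # Minimal sketch of a PCA projection defense (illustrative only; the memo's
    # actual defense relies on a (k,p) separability criterion not given here).
    import numpy as np
    from sklearn.decomposition import PCA

    def fit_pca_defense(x_train: np.ndarray, k_components: int = 50) -> PCA:
        """Fit PCA on flattened training images of shape (n_samples, n_features)."""
        pca = PCA(n_components=k_components)
        pca.fit(x_train.reshape(len(x_train), -1))
        return pca

    def defend(pca: PCA, x: np.ndarray) -> np.ndarray:
        """Project inputs onto the top-k principal components and reconstruct,
        discarding the off-manifold directions where adversarial perturbations
        are hypothesized to concentrate."""
        flat = x.reshape(len(x), -1)
        reconstructed = pca.inverse_transform(pca.transform(flat))
        return reconstructed.reshape(x.shape)

    # Usage: classify defend(pca, x_test) in place of the raw x_test.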

