DSpace@MIT

Oblivious Defense in ML Models: Backdoor Removal without Detection

Author(s)
Goldwasser, Shafi; Shafer, Jonathan; Vafa, Neekon; Vaikuntanathan, Vinod
Download: 3717823.3718245.pdf (928.9 KB)
Terms of use
Creative Commons Attribution-NonCommercial-ShareAlike 4.0: http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
As society grows more reliant on machine learning, ensuring the security of machine learning systems against sophisticated attacks becomes a pressing concern. A recent result of Goldwasser, Kim, Vaikuntanathan, and Zamir (FOCS ’22) shows that an adversary can plant undetectable backdoors in machine learning models, allowing the adversary to covertly control the model’s behavior. Backdoors can be planted in such a way that the backdoored machine learning model is computationally indistinguishable from an honest model without backdoors. In this paper, we present strategies for defending against backdoors in ML models, even if they are undetectable. The key observation is that it is sometimes possible to provably mitigate or even remove backdoors without needing to detect them, using techniques inspired by the notion of random self-reducibility. This depends on properties of the ground-truth labels (chosen by nature), and not of the proposed ML model (which may be chosen by an attacker). We give formal definitions for secure backdoor mitigation, and proceed to show two types of results. First, we show a “global mitigation” technique, which removes all backdoors from a machine learning model under the assumption that the ground-truth labels are close to a Fourier-heavy function. Second, we consider distributions where the ground-truth labels are close to a linear or polynomial function in ℝⁿ. Here, we show “local mitigation” techniques, which remove backdoors with high probability for every input of interest, and are computationally cheaper than global mitigation. All of our constructions are black-box, so our techniques work without needing access to the model’s representation (i.e., its code or parameters). Along the way we prove a simple result for robust mean estimation.
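
To make the flavor of local mitigation concrete, here is a minimal sketch (not the paper's actual construction) for the special case where the ground-truth labels are exactly linear over ℝⁿ. A linear f satisfies f(x) = f(x + r) - f(r), so the defender can answer a query at x using only queries to the possibly backdoored model at freshly randomized points; a backdoor trigger planted at specific inputs is then hit with negligible probability. Taking a median over independent trials is a crude stand-in for the paper's robust mean estimation step. The function name smooth_eval and all parameters below are illustrative.

import numpy as np

def smooth_eval(model, x, num_trials=31, scale=1.0, rng=None):
    """Evaluate a possibly backdoored `model` at `x` via random self-reduction.

    Assumes the ground-truth labels are (close to) a linear function f over
    R^n, so f(x) = f(x + r) - f(r) for every shift r. Each trial queries the
    model only at two randomized points, so a backdoor tied to specific
    trigger inputs fires with negligible probability. The median over trials
    is a simple robust aggregator (a stand-in for robust mean estimation).
    """
    rng = np.random.default_rng() if rng is None else rng
    estimates = []
    for _ in range(num_trials):
        r = rng.normal(scale=scale, size=x.shape)   # fresh random shift
        estimates.append(model(x + r) - model(r))   # linearity: f(x+r) - f(r) = f(x)
    return float(np.median(estimates))

# Toy demonstration: a linear ground truth with one planted trigger input.
w = np.array([1.0, -2.0, 0.5])
trigger = np.array([3.0, 3.0, 3.0])

def backdoored_model(z):
    if np.allclose(z, trigger):     # adversarial output on the trigger
        return 1e6
    return float(w @ z)

print(backdoored_model(trigger))               # 1000000.0: the backdoor fires
print(smooth_eval(backdoored_model, trigger))  # about -1.5 = w @ trigger

Because every model query is randomized, the defender never needs to locate the backdoor, matching the black-box, detection-free spirit of the abstract; the paper's constructions additionally handle labels that are only close to linear (or polynomial) and come with formal error guarantees.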
Description
STOC ’25, Prague, Czechia
Date issued
2025-06-15
URI
https://hdl.handle.net/1721.1/164614
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Publisher
ACM, Proceedings of the 57th Annual ACM Symposium on Theory of Computing
Citation
Shafi Goldwasser, Jonathan Shafer, Neekon Vafa, and Vinod Vaikuntanathan. 2025. Oblivious Defense in ML Models: Backdoor Removal without Detection. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC '25). Association for Computing Machinery, New York, NY, USA, 1785–1794.
Version: Final published version
ISBN
979-8-4007-1510-5

Collections
  • MIT Open Access Articles
