| dc.contributor.advisor | Mądry, Aleksander | |
| dc.contributor.author | Khaddaj, Alaa | |
| dc.date.accessioned | 2026-01-29T15:05:12Z | |
| dc.date.available | 2026-01-29T15:05:12Z | |
| dc.date.issued | 2025-09 | |
| dc.date.submitted | 2025-09-15T14:41:04.887Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164640 | |
| dc.description.abstract | Data has been playing an increasingly important role in the machine learning (ML) pipeline. This thesis deepens the understanding of the effect of data on model performance and reliability. First, we study how the choice of training data affects model performance. We consider a transfer learning setting and present a framework for selecting, from a large pool of data, a pretraining subset that improves model performance on downstream tasks. Our approach, however, requires training multiple target models, which becomes prohibitively expensive at large scale. To that end, we explore using smaller, cheaper proxy models to approximate large-model behavior and select the pretraining data using the cheaper model. We show the effectiveness of this approach in two dataset selection settings: language modeling and imitation learning. Second, we explore the role of data in model reliability and consider two threat models: backdoor attacks and malicious data editing. In the first threat model, an adversary injects a few doctored samples into the training set to control model predictions at inference time. We study the effect of these malicious samples on model behavior and then propose a framework for detecting and removing them from the training data. In the second threat model, an adversary leverages generative models, such as diffusion models, to maliciously modify personal data and generate harmful digital content. We focus on image editing and investigate how we can imperceptibly modify personal images to mitigate editing with diffusion models and raise the cost of harmful content generation. Overall, this thesis contributes to the understanding of the role of data in driving model behavior. Through these efforts, we aim to provide mechanisms for training models that (i) perform better and (ii) are more reliable when deployed in the real world. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | How Data Drives ML Models Performance | |
| dc.type | Thesis | |
| dc.description.degree | Ph.D. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Doctoral | |
| thesis.degree.name | Doctor of Philosophy | |