Show simple item record

dc.contributor.advisor: Kagal, Lalana
dc.contributor.author: Alotaibi, Abdulrahman
dc.date.accessioned: 2026-04-21T20:43:11Z
dc.date.available: 2026-04-21T20:43:11Z
dc.date.issued: 2025-09
dc.date.submitted: 2025-09-18T18:29:30.725Z
dc.identifier.uri: https://hdl.handle.net/1721.1/165585
dc.description.abstract: Federated Learning (FL) enables collaborative training of machine learning models without centralizing raw data, offering a practical framework for real-world AI. Yet real-world deployments face challenges in data-constrained environments, where client datasets are both limited and heterogeneous, as well as broader adversarial risks and complex regulatory requirements. This dissertation addresses these challenges by integrating Knowledge Evolution (KE) and Later-Layer Forgetting (LLF) into the FL paradigm, and by analyzing their combined impact on security and compliance. The proposed FL-KE and FL-LLF frameworks introduce selective forgetting mechanisms that prune less salient representations, reallocate model capacity, and enable iterative refinement over generations. Experimental evaluations on diverse image classification datasets, including Flower-102, CUB-200, MIT-67, and Stanford Dogs, demonstrate accelerated convergence, improved generalization, and robustness under data scarcity compared to baseline FL. Beyond performance, this work examines the security implications of FL-KE and FL-LLF through a comprehensive threat model covering poisoning, backdoor, inference, free-rider, Sybil, and Byzantine attacks. Analysis reveals that selective forgetting can reduce the persistence of malicious updates, mitigating certain attack vectors while coexisting with robust aggregation and secure aggregation protocols. Finally, this dissertation explores the intersection of FL and data privacy regulations through an empirical survey of stakeholders across the Gulf Cooperation Council (GCC) region. The findings reveal a gap between regulatory awareness and operational compliance, as well as opportunities for FL, especially in its KE- and LLF-enhanced forms, to align with the legal principles of data minimization, purpose limitation, and user rights.
By combining methodological advances, defenses against adversarial threats, and attention to regulatory requirements, this work offers a framework for building the next generation of federated learning systems that are effective, secure, and compliant in varied settings. These contributions also support the broader goal of trusted and safe machine learning, where the demand for robust, privacy-respecting, and regulation-aware systems is central to preventing harmful outcomes, promoting fairness, and protecting the integrity of AI in sensitive fields such as healthcare, finance, and government. The findings presented here carry direct implications for deploying federated AI in high-stakes environments and highlight promising directions for future research at the intersection of machine learning, security, and policy.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Advanced Federated Learning Algorithms Leveraging Selective Forgetting for Data-Constrained Environments
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy

