DSpace@MIT

Analog On-chip Training and Inference with Non-volatile Memory Devices

Author(s)
Lee, Jungsoo
Download Thesis PDF (3.422 MB)
Advisor
del Alamo, Jesús A.
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
As the demand for computation in neural networks continues to rise, conventional computing resources are increasingly constrained by their limited energy efficiency. One promising solution to this challenge is analog in-memory computing (AIMC), which enables efficient matrix-vector multiplications by encoding synaptic weights into the conductance of non-volatile memory devices arranged in crossbar arrays. To explore the potential of non-volatile memory devices for AIMC, I simulate crossbar array operations using IBM's AIHWKIT. With this tool, I investigate the implementation of various analog computing algorithms, including Tiki-Taka. AIMC is evaluated on simple MNIST classification tasks and on more complex deep learning models, namely Long Short-Term Memory (LSTM) networks. I demonstrate that devices can be categorized based on their asymmetry and non-linear weight modulation behavior. Performance improvements from the Tiki-Taka algorithm are observed only when the device provides a sufficient convergence-dragging force; otherwise, the algorithm may even degrade performance. I also investigate how pulse-to-pulse noise and device-to-device variability affect system performance, as well as how different peripheral circuit configurations influence overall behavior. Finally, I propose an Analog Low-Rank Adapter (Analog LoRA) by applying analog computing to the fine-tuning of large language models, and I explore the conditions necessary for Analog LoRA to achieve performance comparable to its digital counterpart. Based on these findings, I present design guidelines for effectively applying analog computing to various machine learning tasks on edge devices.
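
As a rough illustration of the simulation setup described in the abstract, the sketch below builds a single analog layer with IBM's AIHWKIT, in which the weights are held in a simulated resistive crossbar and the forward pass performs the matrix-vector multiplication in the analog array. The specific device model (ConstantStepDevice) and layer dimensions are assumptions chosen for illustration, not the configurations evaluated in the thesis.

# Minimal sketch: one fully connected layer whose weights live in a
# simulated analog crossbar (IBM's aihwkit). Device model and sizes are
# illustrative assumptions only.
import torch

from aihwkit.nn import AnalogLinear
from aihwkit.simulator.configs import SingleRPUConfig, ConstantStepDevice

# Each weight maps to the conductance of a simulated non-volatile device;
# the forward pass performs the matrix-vector multiply in the crossbar model.
rpu_config = SingleRPUConfig(device=ConstantStepDevice())
layer = AnalogLinear(in_features=784, out_features=10, bias=True,
                     rpu_config=rpu_config)

x = torch.rand(1, 784)   # e.g. one flattened 28x28 MNIST image
logits = layer(x)        # analog matrix-vector multiplication (simulated)
print(logits.shape)      # torch.Size([1, 10])

Training such a layer on MNIST would typically also use an analog-aware optimizer such as aihwkit's AnalogSGD, and algorithms like Tiki-Taka are expressed in aihwkit through compound device configurations rather than the single-device configuration shown here.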
Date issued
2025-05
URI
https://hdl.handle.net/1721.1/163681
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
