Seeing Beyond Limits with Physics-Informed Priors
Author(s)
Liu, Yang
Advisor
Durand, Frédo
Abstract
Conventional imaging systems are limited in dimensionality and visibility: standard sensors capture only two-dimensional data, while light diffuses or scatters across surfaces and through complex media. This dissertation reformulates imaging as an interplay of optical encoding and neural decoding: it models the forward physical process and iteratively refines reconstructions using deep denoisers. By embedding physics-informed priors into this optimization, it aims to surpass conventional limits in dimensionality and visibility.

First, I develop Privacy Dual Imaging using an ambient light sensor. This approach tackles both the dimensionality and visibility challenges of imaging with a single-point, non-imaging component on smart devices. Inspired by the "Big Brother" telescreen in Orwell's 1984, I demonstrate how subtle light-intensity fluctuations can reveal unseen image information; the goal, however, is to highlight privacy concerns, not to exploit them. The approach addresses two visibility limits, pixel-less and lens-less imaging, by using the screen as a spatial modulator and exploiting involuntary motion to create a virtual pinhole effect. A quantized, physics-informed prior improves reconstruction from heavily quantized sensor measurements.

Second, I propose Snapshot Compressive Imaging (SCI) augmented with deep plug-and-play physics-informed priors to overcome the dimensionality limit of 2D sensors. SCI compressively encodes multiple temporal, spectral, or angular frames into a single measurement. A deep plug-and-play prior algorithm introduces high-dimensional priors learned from images and videos into the iterative reconstruction process, improving fidelity, speed, and flexibility. Experiments show notable gains in reconstruction quality and efficiency across different SCI datasets, including large-format 4K UHD scenarios.
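The plug-and-play reconstruction described above alternates between a prior step and a data-consistency step. A minimal NumPy sketch for the video-SCI forward model y = Σ_t mask_t ⊙ x_t; the function names are illustrative, and the 3×3 box filter is only a toy stand-in for the deep denoiser the thesis actually plugs in:

```python
import numpy as np

def box_denoise(x):
    """Toy per-frame 3x3 box filter; a placeholder for a deep denoiser."""
    T, H, W = x.shape
    out = np.empty_like(x)
    for t in range(T):
        p = np.pad(x[t], 1, mode="edge")
        out[t] = sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
    return out

def pnp_sci(y, masks, n_iters=20, denoise=box_denoise):
    """Plug-and-play reconstruction sketch for video SCI.

    Forward model: y = sum_t masks[t] * x[t] (elementwise). Each iteration
    applies the prior (denoiser), then projects the estimate back onto the
    set of volumes consistent with the single 2D measurement y.
    """
    phi_sum = (masks ** 2).sum(axis=0)        # diagonal of Phi Phi^T
    phi_sum[phi_sum == 0] = 1.0               # avoid division by zero
    x = masks * (y / phi_sum)[None]           # back-projection initialization
    for _ in range(n_iters):
        x = denoise(x)                        # prior step (deep denoiser in practice)
        residual = y - (masks * x).sum(axis=0)
        x = x + masks * (residual / phi_sum)[None]  # data-consistency projection
    return x
```

After each projection the estimate reproduces the measurement exactly, so the denoiser is the only component injecting prior knowledge; swapping in a stronger learned denoiser changes the reconstruction quality without changing the loop.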
Third, I introduce Rank-Reduced physics-informed priors, showing that large pretrained AI models, especially diffusion models, can act as general visual priors across both dimensionality and visibility challenges. A relax-then-tighten strategy handles ill-conditioning by first applying truncated singular value decomposition to reduce rank deficiencies, then using a Stable Diffusion refiner (SDEdit) as a plug-and-play prior that constrains reconstructions to valid image spaces. Simulations and passive non-line-of-sight imaging experiments verify the approach's stability and effectiveness. Physics-informed priors promise to extend the boundaries of imaging, enabling us to see beyond current dimensionality and visibility limits and to unlock new applications from macro-scale to micro-scale observations.
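The relax step of relax-then-tighten can be illustrated with a truncated-SVD solve: singular values of the ill-conditioned forward matrix below a threshold are discarded rather than inverted, so measurement noise along near-null directions is not amplified. A minimal NumPy sketch; the function name and the tolerance `tol` are illustrative, and the tighten step (the SDEdit plug-and-play refiner in the thesis) is omitted:

```python
import numpy as np

def tsvd_relax(A, y, tol=1e-3):
    """Relax step: truncated-SVD solve of an ill-conditioned y = A @ x.

    Singular values below tol * s_max are treated as rank deficiencies and
    dropped instead of inverted, so noise along near-null directions is not
    amplified. In the thesis, the detail lost here is then recovered
    ("tightened") by a Stable Diffusion / SDEdit plug-and-play prior; that
    refinement step is not part of this sketch.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int((s > tol * s[0]).sum())   # effective rank after truncation
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]           # invert only the well-conditioned part
    return Vt.T @ (s_inv * (U.T @ y))
```

Compared with a full pseudoinverse, the truncated solve trades a small bias (components lost along the discarded directions) for a large variance reduction, which is what keeps the subsequent prior-driven refinement stable.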
Date issued
2025-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology