Show simple item record

dc.contributor.author    Hall, Robert
dc.contributor.author    Chen, Justin
dc.date.accessioned      2026-02-17T19:53:16Z
dc.date.available        2026-02-17T19:53:16Z
dc.date.issued           2026-02-17
dc.identifier.uri        https://hdl.handle.net/1721.1/164899
dc.description.abstract  Optical systems such as telescopes, lasers, and microscopes suffer degraded performance over long distances due to scintillation caused by Earth's atmosphere; adaptive optics (AO) is often used to recover signal-to-noise ratio (SNR) or image quality. Astronomers have found success with laser-based adaptive optics, in which a laser probes the atmosphere so that its effects can be subtracted from the resulting image. Although effective in most cases, these systems can be extremely costly, are computationally intensive in real time, and fall short in some edge cases. We propose an autoencoder/decoder and a generalized sequence-to-sequence (LSTM) model as a cost-effective way to offload computational complexity from real time and to improve performance in edge cases. This study, conducted in collaboration with MIT Lincoln Laboratory [1], uses four simulated datasets of wavefront sensor (WFS) frames spanning a variety of atmospheric conditions. We found autoencoding performance just shy of traditional methods, and LSTM predictions that capture the general shape on the WFS well but suffer from scaling issues.  en_US
dc.description.sponsorship  Department of the Air Force Artificial Intelligence Accelerator  en_US
dc.language.iso          en_US  en_US
dc.subject               Adaptive Optics  en_US
dc.title                 Machine Learning for the Enhancement of Adaptive Optics  en_US
dc.type                  Technical Report  en_US
dc.contributor.department  Lincoln Laboratory  en_US
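The autoencoding approach described in the abstract can be illustrated with a minimal sketch: a linear autoencoder compressing synthetic wavefront-sensor frames, whose optimal encoder is given in closed form by the top principal components. The frame size, latent dimension, and synthetic "turbulence" data below are assumptions for illustration only, not the report's actual datasets or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, frame_dim, latent_dim = 256, 64, 8   # e.g. 8x8 WFS frames, flattened

# Synthetic frames: a few smooth low-order modes plus measurement noise,
# loosely mimicking turbulence-dominated WFS data (assumed, for demonstration).
modes = rng.normal(size=(latent_dim, frame_dim))
coeffs = rng.normal(size=(n_frames, latent_dim))
X = coeffs @ modes + 0.05 * rng.normal(size=(n_frames, frame_dim))

# A linear autoencoder minimizing mean-squared reconstruction error has a
# closed-form optimum: project onto the top principal components of the data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
encode = Vt[:latent_dim].T            # frame_dim x latent_dim encoder
z = Xc @ encode                       # compressed latent representation
X_hat = z @ encode.T + X.mean(axis=0) # decoded (reconstructed) frames

mse = np.mean((X_hat - X) ** 2)
var = np.mean(Xc ** 2)
print(f"relative reconstruction error: {mse / var:.4f}")
```

Because the synthetic frames are (noisy) rank-8, an 8-dimensional latent code recovers them almost exactly; a nonlinear autoencoder trained by gradient descent, as in the report, generalizes this idea to structure that PCA cannot capture.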

